The number of items ranged from 1 to more than 100, and administration times ranged from under 5 minutes to more than an hour. Researchers used public records or targeted sampling to derive measures of urbanicity, low socioeconomic status, immigration status, homelessness/housing instability, and incarceration.
Though the reported evaluations of social determinants of health (SDoHs) hold promise, there is a pressing need to develop and thoroughly validate concise screening instruments suitable for implementation in clinical practice. We recommend novel assessment tools, including objective individual- and community-level measures enabled by new technology, rigorous psychometric evaluation of reliability, validity, and sensitivity to change, and accompanying interventions, and we offer suggestions for training curricula.
Unsupervised deformable image registration commonly relies on progressive network designs, such as pyramid and cascade architectures, to achieve strong performance. Existing progressive networks, however, typically consider only a single-scale deformation field at each level or stage and ignore the dependencies across non-adjacent levels or stages. In this paper, we present a novel unsupervised learning method, the Self-Distilled Hierarchical Network (SDHNet). SDHNet decomposes registration into several iterations and, in each iteration, generates hierarchical deformation fields (HDFs) simultaneously, with a learned hidden state connecting successive iterations. Specifically, hierarchical features are extracted by multiple parallel gated recurrent units to generate the HDFs, which are then fused adaptively, conditioned both on themselves and on contextual information from the input images. In addition, unlike conventional unsupervised methods that use only similarity and regularization losses, SDHNet introduces a novel self-deformation distillation scheme: the final deformation field is distilled as teacher guidance, which constrains the intermediate deformation fields in both the deformation-value and deformation-gradient spaces. Experiments on five benchmark datasets, including brain MRI and liver CT, show that SDHNet outperforms state-of-the-art methods with faster inference and lower GPU memory usage. The code for SDHNet is available at https://github.com/Blcony/SDHNet.
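As a rough illustration of the self-deformation distillation idea, the sketch below (in PyTorch) penalizes intermediate deformation fields for deviating from the detached final field in both value and gradient space. The tensor shapes, the finite-difference gradient helper, the loss weights, and the assumption that intermediate fields are already upsampled to the final resolution are ours, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def spatial_gradients(field):
        # Finite-difference gradients of a 3D deformation field of shape (B, 3, D, H, W).
        dz = field[:, :, 1:, :, :] - field[:, :, :-1, :, :]
        dy = field[:, :, :, 1:, :] - field[:, :, :, :-1, :]
        dx = field[:, :, :, :, 1:] - field[:, :, :, :, :-1]
        return dz, dy, dx

    def self_distillation_loss(intermediate_fields, final_field, w_val=1.0, w_grad=1.0):
        # Treat the final deformation field as a fixed teacher (no gradient flows into it).
        teacher = final_field.detach()
        t_grads = spatial_gradients(teacher)
        loss = 0.0
        for f in intermediate_fields:
            # Deformation-value constraint: intermediate field should match the teacher.
            loss = loss + w_val * F.mse_loss(f, teacher)
            # Deformation-gradient constraint: match the teacher's spatial derivatives.
            for g, tg in zip(spatial_gradients(f), t_grads):
                loss = loss + w_grad * F.mse_loss(g, tg)
        return loss / len(intermediate_fields)

In training, this term would be added to the usual similarity and regularization losses; detaching the teacher keeps the distillation from dragging the final field toward its intermediate estimates.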
Supervised deep learning methods for CT metal artifact reduction (MAR) frequently suffer from a domain gap between simulated training data and real-world data, and therefore generalize poorly from simulation to practice. Unsupervised MAR methods can be trained directly on real-world data, but they learn MAR from indirect metrics and often perform poorly. To bridge this domain gap, we propose UDAMAR, a novel MAR method based on unsupervised domain adaptation (UDA). Specifically, we add a UDA regularization loss to a standard image-domain supervised MAR method, aligning the feature space to reduce the domain discrepancy between simulated and real artifacts. Our adversarial UDA operates on a low-level feature space, where the domain differences between metal artifacts are most pronounced. UDAMAR can simultaneously learn MAR from labeled simulated data and extract critical information from unlabeled real data. In experiments on clinical dental and torso datasets, UDAMAR outperforms both its supervised backbone and two state-of-the-art unsupervised methods. We examine UDAMAR carefully through experiments on simulated metal artifacts and ablation studies. In simulation, its performance closely matches that of supervised methods while surpassing unsupervised ones, confirming its efficacy. Ablations on the weight of the UDA regularization loss, the choice of UDA feature layers, and the amount of real training data further demonstrate the robustness of UDAMAR. Its simple, clean design makes UDAMAR easy to implement. These advantages make it a very practical solution for real-world CT MAR.
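To make the adversarial UDA regularization concrete, here is a minimal sketch assuming a gradient-reversal-style domain discriminator attached to low-level encoder features. The discriminator architecture, the layer choice, and all hyperparameters are illustrative assumptions, not the published UDAMAR design.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        # Identity in the forward pass; scaled, sign-flipped gradient in the backward pass.
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    class DomainDiscriminator(nn.Module):
        # Classifies low-level feature maps as simulated (0) vs. real (1).
        def __init__(self, channels):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 1, 3, stride=2, padding=1),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())

        def forward(self, feat, lambd=1.0):
            # Gradient reversal makes the encoder learn domain-invariant features.
            return self.net(GradReverse.apply(feat, lambd))

A binary cross-entropy loss between these logits and the domain labels, weighted and added to the supervised MAR loss on simulated images, would then serve as the UDA regularization term.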
Numerous adversarial training (AT) techniques have been developed in recent years to strengthen deep learning models against adversarial attacks. However, standard AT methods typically assume that the training and testing data come from the same distribution and that the training data are labeled. When either assumption breaks, existing AT methods fail: they either cannot transfer knowledge from a source domain to an unlabeled target domain or are confused by adversarial samples in that unseen space. In this paper, we first identify this new and challenging problem: adversarial training in an unlabeled target domain. We then propose a novel framework, Unsupervised Cross-domain Adversarial Training (UCAT), to address it. UCAT effectively leverages the knowledge of the labeled source domain to guard the training against adversarial samples, using automatically selected high-quality pseudo-labels for the unlabeled target data together with robust anchor representations from the source domain. Experiments on four public benchmarks show that models trained with UCAT achieve both high accuracy and strong robustness. A comprehensive set of ablation studies demonstrates the effectiveness of the proposed components. The source code is publicly available at https://github.com/DIAL-RPI/UCAT.
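The sketch below illustrates one plausible reading of the pseudo-labeling step: confidence-thresholded selection on unlabeled target data, paired with a standard PGD attack for adversarial training on the selected samples. The threshold, the attack parameters, and the function names are our assumptions rather than UCAT's exact procedure.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def select_pseudo_labels(model, target_images, threshold=0.95):
        # Keep only target samples whose top predicted class probability is high.
        probs = F.softmax(model(target_images), dim=1)
        conf, labels = probs.max(dim=1)
        keep = conf >= threshold
        return target_images[keep], labels[keep]

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Standard PGD: maximize cross-entropy within an L-infinity ball around x.
        x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
        for _ in range(steps):
            x_adv = x_adv.detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)
        return x_adv.detach()

Training would then minimize the loss on adversarial examples generated from the pseudo-labeled target samples, alongside the supervised adversarial loss on the source domain.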
Video rescaling has recently attracted considerable interest because of its practical applications, notably in video compression. Unlike video super-resolution, which focuses on upscaling bicubic-downscaled video, video rescaling methods jointly optimize both the downscaling and upscaling stages. However, the inevitable information loss during downscaling remains a challenge for upscaling. Moreover, the network architectures of previous methods rely heavily on convolution to aggregate information within local regions, which limits their ability to capture relationships between distant locations. To address these two issues, we propose a unified video rescaling framework with the following designs. First, to regularize the information in the downscaled videos, we propose a contrastive learning objective that synthesizes hard negative samples online. This auxiliary contrastive objective encourages the downscaler to retain more information that benefits the upscaler. Second, we present a selective global aggregation module (SGAM) that efficiently captures long-range redundancy in high-resolution videos by dynamically selecting only a few representative locations to participate in the computationally intensive self-attention (SA) operations. SGAM thereby enjoys the efficiency of sparse modeling while preserving the global modeling capability of SA. We call the resulting framework Contrastive Learning with Selective Aggregation (CLSA). Extensive experiments on five datasets show that CLSA outperforms video rescaling and rescaling-based video compression methods, achieving state-of-the-art performance.
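As a loose illustration of selective aggregation, the following sketch scores spatial locations, keeps only the top-k as keys and values, and runs standard self-attention from all queries onto that subset. The scoring head, the value of k, and the tensor shapes are assumptions, not SGAM's actual design.

    import torch
    import torch.nn as nn

    class SparseSelfAttention(nn.Module):
        # All locations attend to a small set of dynamically selected locations.
        def __init__(self, dim, k=64, heads=4):
            super().__init__()
            self.score = nn.Linear(dim, 1)   # importance score per location
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.k = k

        def forward(self, x):
            # x: (B, N, C) tokens flattened from a high-resolution feature map.
            scores = self.score(x).squeeze(-1)              # (B, N)
            k = min(self.k, x.shape[1])
            idx = scores.topk(k, dim=1).indices             # (B, k) selected locations
            kv = torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, x.shape[-1]))
            out, _ = self.attn(x, kv, kv)                   # queries attend to the subset only
            return out

Because attention cost scales with the number of key/value tokens, restricting them to k representative locations reduces the quadratic cost of full SA while every query still receives globally aggregated information.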
Large erroneous regions commonly blemish depth maps, even in publicly available RGB-depth datasets. Learning-based depth recovery methods are limited by the scarcity of high-quality datasets, and optimization-based methods commonly struggle to correct large errors because they rely too heavily on local contexts. This paper presents an RGB-guided depth map recovery method based on a fully connected conditional random field (dense CRF) model, which jointly exploits local and global context from both the depth map and the corresponding RGB image. Under the dense CRF model, the likelihood of a high-quality depth map is maximized given a low-quality depth map and a reference RGB image as input. Redesigned unary and pairwise terms in the optimization function constrain the local and global structures of the depth map under the guidance of the RGB image. Furthermore, texture-copy artifacts are addressed with two-stage dense CRF models that operate in a coarse-to-fine manner. A coarse depth map is first obtained by embedding the RGB image in a dense CRF model in units of 3x3 blocks. It is then refined by embedding the RGB image in another model pixel by pixel, with the model applied mainly to discontinuous regions. Experiments on six datasets show that the proposed method markedly outperforms a dozen baselines in correcting erroneous regions and reducing texture-copy artifacts in depth maps.
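For orientation, a generic dense CRF energy of the kind described here can be written as follows; the specific potentials shown are illustrative textbook choices, not the paper's redesigned terms.

    E(D) = \sum_i \psi_u(d_i) + \sum_{i<j} \psi_p(d_i, d_j),

    \psi_u(d_i) = (d_i - \tilde{d}_i)^2,
    \psi_p(d_i, d_j) = w \exp\left( -\frac{\|p_i - p_j\|^2}{2\theta_\alpha^2} - \frac{\|I_i - I_j\|^2}{2\theta_\beta^2} \right) (d_i - d_j)^2,

where \tilde{d}_i is the input low-quality depth at pixel i, p_i is the pixel position, and I_i is the RGB value. Maximizing the likelihood of the recovered depth map D corresponds to minimizing E(D); the RGB guidance enters through the pairwise term, which couples every pixel pair and thus provides the global context that purely local methods lack.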
Scene text image super-resolution (STISR) aims to improve the resolution and visual quality of low-resolution (LR) scene text images while simultaneously boosting text recognition performance.