{"id":32296,"date":"2024-07-08T09:56:41","date_gmt":"2024-07-08T07:56:41","guid":{"rendered":"https:\/\/nr.no\/en\/?post_type=bc_area&#038;p=32296"},"modified":"2025-10-01T14:27:37","modified_gmt":"2025-10-01T12:27:37","slug":"deep-learning-for-complex-image-data","status":"publish","type":"bc_area","link":"https:\/\/nr.no\/en\/areas\/deep-learning-for-complex-image-data\/","title":{"rendered":"Deep learning for complex image data"},"content":{"rendered":"\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:66.66%\">\n<p><strong>Deep learning (DL) has revolutionised computer vision and significantly transformed the fields of image analysis and machine learning. We have been at the forefront of adapting DL for numerous applications and it has been integral to most of our projects in image analysis since 2018.  <\/strong><\/p>\n\n\n\n<p>NR contributes to research and development in image analysis, machine learning, and Earth observation. Our applications are diverse, spanning healthcare, transportation, ocean, climate and environment, infrastructure, and general mapping and monitoring. We use deep learning to analyse complex image data from a range of sources, including ultrasound sensors, MRI machines, echo sounders, seismic sensors, multi-spectral and hyperspectral sensors, and synthetic aperture radar (SAR) satellite sensors. 
<\/p>\n\n\n\t<div class=\"nr-spacer nr-spacer-small wp-block-nr-spacer\">\n\t<\/div>\n\t\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<h2 class=\"wp-block-heading\">Deep learning exceeds human performance<\/h2>\n\n\n\n<p>Today, deep learning is the leading approach for solving most image analysis problems, often achieving accuracies that surpass human performance. We have successfully applied deep learning to a wide range of real-world applications, with many of our solutions in operational use every day.<\/p>\n<\/div>\n<\/div>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"913\" height=\"383\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/threeimages-deeplearning.png\" alt=\"Three images displayed DL capabilities. The first image is an interpretation of seismic data and is shown in layered colours, the second is snow map of Norway, the third is an x-ray of a breast showing breast tissue and a localised area marked in red where there may be a potential abnormality.\" class=\"wp-image-32346\" srcset=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/threeimages-deeplearning.png 913w, https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/threeimages-deeplearning-300x126.png 300w, https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/threeimages-deeplearning-768x322.png 768w\" sizes=\"auto, (max-width: 913px) 100vw, 913px\" \/><figcaption class=\"wp-element-caption\"><em>Deep learning can be applied to solve a variety of urgent problems. From left to right: 1. Interpreting seismic data. Image: Equinor\/NR.&nbsp; 2. Daily mapping of snow cover in Norway using satellite images. Image: ESA\/NR. 3. Detecting cancerous growth in breast cancer screening images. 
Image: The Cancer Registry of Norway\/NR.<\/em><\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Research challenges<\/strong><\/h3>\n\n\n\n<p>Developing and applying deep learning models comes with several challenges. Annotating complex image data requires expert knowledge and is both costly and time-consuming. While machine learning methods are data-driven, incorporating human knowledge, constraints and physics is often necessary. To ensure trustworthiness, it is crucial that the models acknowledge their own limitations. Additionally, due to their complexity, the models often lack clarity and transparency during inference.<\/p>\n\n\n\n<p>Our research encompasses the following areas:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Learning from limited data: Utilising methods such as self-supervised learning to effectively explore seismic data.<\/li>\n\n\n\n<li>Incorporating context, dependencies and prior knowledge: Enhancing ultrasound analysis by leveraging graphs or improving thematic mapping of coastal habitats from airborne imagery using class hierarchies.<\/li>\n\n\n\n<li>Estimating confidence and quantifying uncertainties: Developing techniques to estimate pixel-wise uncertainty in snow maps.<\/li>\n\n\n\n<li>Understanding predictions: Providing explanations for predictions, such as age estimates of fish.<\/li>\n<\/ul>\n\n\n\t<div class=\"nr-spacer nr-spacer-small wp-block-nr-spacer\">\n\t<\/div>\n\t\n\n\n<h2 class=\"wp-block-heading\">Visual Intelligence<\/h2>\n\n\n\n<p>These challenges are also central to Visual Intelligence, a Centre for Research-based Innovation (SFI). This centre is a collaborative effort involving the host institution, UiT The Arctic University of Norway, NR and the University of Oslo (UiO). 
Together, we are committed to research-driven innovation, focusing on adapting and applying deep learning to complex image data, in partnership with a number of user groups.<\/p>\n\n\n\n<p>We also actively support related research areas such as:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>AI for small devices and large GPU clusters,<\/li>\n\n\n\n<li>super-resolution and generative AI,<\/li>\n\n\n\n<li>multimodality &#8211; integrating text, images and other data,<\/li>\n\n\n\n<li>foundation models for handling complex data.<\/li>\n<\/ul>\n\n\n\t<div class=\"nr-spacer nr-spacer-small wp-block-nr-spacer\">\n\t<\/div>\n\t\n\n\n<h3 class=\"wp-block-heading\"><strong>Learning from limited data<\/strong><\/h3>\n\n\n\n<p>Deep learning methods steadily improve as more training data becomes available. However, in practical applications like medical and satellite imaging, obtaining high-quality annotated data is a significant challenge. Annotations are important as they provide labels to guide the learning process, but they are often incomplete or inconsistent. Furthermore, annotations have typically been made for purposes other than training machine learning models, which can make them less suitable for this task.<\/p>\n\n\n\n<p>In response to these challenges, our research explores innovative methods such as self-supervised learning, leveraging weak labels and domain adaptation. These strategies help us train models effectively, even when handling suboptimal datasets.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"913\" height=\"383\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/06\/Detection-of-changed-buildings-BAMJO.png\" alt=\"The photo shows two aerial images of buildings side by side. 
The building outlines are marked in yellow.\" class=\"wp-image-32313\" srcset=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/06\/Detection-of-changed-buildings-BAMJO.png 913w, https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/06\/Detection-of-changed-buildings-BAMJO-300x126.png 300w, https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/06\/Detection-of-changed-buildings-BAMJO-768x322.png 768w\" sizes=\"auto, (max-width: 913px) 100vw, 913px\" \/><figcaption class=\"wp-element-caption\"><em>Using self-supervised learning to identify changes in building structures between 2018 and 2020 without the need for labelled training data. Image: Field Group\/NR.<\/em><\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Understanding deep learning predictions<\/h2>\n\n\n\n<p>It is not always clear why a deep learning model makes a specific prediction. While the models typically work well across a range of scenarios, they can make mistakes that seem odd or are even harmful. Understanding the models&#8217; behaviour is crucial, especially when they are applied in sensitive areas such as healthcare and industry.<\/p>\n\n\n\n<p>At NR, our research focuses on developing methods that can explain predictions and accurately estimate the uncertainty associated with them. For example, our recent work in breast cancer screening involved explainability analysis to determine which parts of an image most significantly influence the model&#8217;s decision to identify a tumour.<\/p>\n\n\n\t<div class=\"nr-spacer nr-spacer-small wp-block-nr-spacer\">\n\t<\/div>\n\t\n\n\n<h2 class=\"wp-block-heading\">Incorporating dependencies and prior knowledge<\/h2>\n\n\n\n<p>Machine learning excels by learning directly from data, rather than using pre-defined models. However, processing complex data requires a combination of traditional machine learning and physical or geometrical models. 
This integrated approach allows us to include prior knowledge and dependencies, and to manage multiple types of complex images at the same time.<\/p>\n\n\n\n<p>Despite their strengths, many deep learning models struggle to incorporate context and prior knowledge, such as topological or physical properties. We are researching methods to address these limitations. One example is our exploration of graph convolutional networks (GCNs) as a way to model the relative positions of key features in medical images.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"471\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/predictionscanner-deeplearning-1024x471.png\" alt=\"Two images displaying landmark positioning from ultrasound images. The images look similar to x-rays and positioning is marked in various colours.\" class=\"wp-image-32345\" srcset=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/predictionscanner-deeplearning-1024x471.png 1024w, https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/predictionscanner-deeplearning-300x138.png 300w, https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/predictionscanner-deeplearning-768x353.png 768w, https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/predictionscanner-deeplearning.png 1386w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\"><em>Landmark positioning accuracy improves when using a graph-based method (right image) compared to the standard approach (left). Predicted points are shown in dark colours and manually annotated landmarks are in light colours. 
Image: GE Vingmed Ultrasound\/NR.<\/em><\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Estimating confidence and uncertainties in predictions<\/h2>\n\n\n\n<p>Deep neural networks are powerful predictive models, but they sometimes fail to recognise erroneous predictions or detect when the input data falls outside their reliable range. In critical or automated applications, where mistakes may have serious consequences, this understanding is essential. For example, in our coastal habitat mapping projects, our approach not only classifies data but also provides maps conveying prediction credibility.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"945\" height=\"322\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/deeplearning-maps.png\" alt=\"Two images side by side showing regional coastal mapping. Various elements and features are distinguished by colours.\" class=\"wp-image-32344\" srcset=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/deeplearning-maps.png 945w, https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/deeplearning-maps-300x102.png 300w, https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/deeplearning-maps-768x262.png 768w\" sizes=\"auto, (max-width: 945px) 100vw, 945px\" \/><figcaption class=\"wp-element-caption\"><em>Predictions of different coastal habitat classes are shown on the left, with the credibility of these predictions (p-values) displayed on the right. Areas with the lowest credibility are marked in red. Image: NR.<\/em><\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">AI for small devices and large GPU clusters<\/h2>\n\n\n\n<p>Deep neural networks require substantial computing power. At NR, we develop models that cater to a range of hardware configurations. 
For clients requiring models that can operate on battery-powered devices, as in edge AI, we prioritise compactness and efficiency. For others, we may develop models that leverage the power of large GPU clusters, maximising performance across multiple machines. As a leading institute in computer science, we ensure our models and their corresponding data pipelines are optimised for efficiency on any given platform.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Super-resolution and generative AI<\/h2>\n\n\n\n<p>Diffusion models are transforming how we generate images, from everyday objects and scenic landscapes to realistic human faces. We use these models to improve the resolution of satellite images. This process, known as super-resolution, is particularly well-suited to these types of images. High-resolution satellite images are vital to environmental monitoring, urban planning and disaster management. By improving image clarity, diffusion models provide decision-makers with more precise data, thereby enhancing both analysis and decision-making.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"685\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/deeplearning-6heatmaps-1024x685.png\" alt=\"Six satellite images displaying how resolution is improved with diffusion models. 
The colours range from green and red to brown.\" class=\"wp-image-32343\" srcset=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/deeplearning-6heatmaps-1024x685.png 1024w, https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/deeplearning-6heatmaps-300x201.png 300w, https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/deeplearning-6heatmaps-768x514.png 768w, https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/deeplearning-6heatmaps.png 1361w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\"><em>Enhancing satellite image resolution with diffusion models. Top row: Original images. Bottom row: Images with improved resolution achieved using a diffusion model. Images: WorldStrat\/NR.<\/em><\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<div class=\"wp-block-columns is-layout-flex wp-container-core-columns-is-layout-28f84493 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<h3 class=\"wp-block-heading\">Combining text, images and other data<\/h3>\n\n\n\n<p>Deep learning models are versatile and can process various types of data at the same time, including images, text, audio and additional metadata. This capability enables the development of models that can, for example, generate textual descriptions from images. 
We harness this technology to provide comprehensive solutions for clients who need to analyse different types of data related to the same phenomena.<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"266\" height=\"133\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/07\/cloudinthesky-deeplearning.png\" alt=\"The image shows clouds in a sunlit sky.\" class=\"wp-image-32347\" style=\"width:450px\" \/><figcaption class=\"wp-element-caption\"><em>We used an open-source multimodal text-image model (CLIP) to assess the similarity between an image from a webpage and its caption, to ensure that the webpage complies with WCAG accessibility guidelines. Here, the image-caption similarity was 99%. Caption: &#8220;Panoramic picture of a cloud that can be classified as cumulo-nimbus in the reddish\/orange light of a setting sun.&#8221; Image: Thennicke\/Wikimedia Commons.<\/em><\/figcaption><\/figure>\n<\/div>\n<\/div>\n\n\n\n<p><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Foundation models<\/h3>\n\n\n\n<p>Training deep learning models typically requires extensive data with detailed annotations, a process that is often expensive and time-consuming. Recent advancements like self-supervised learning have begun to address these challenges. One example is to hide parts of the input data and train models to reconstruct the concealed portions. This approach helps models develop an understanding of the data without prior annotation. Known as foundation models, these systems form a base that can be further refined to solve specific tasks. Importantly, adapting them to a specific task requires much less annotated data than training a model from scratch. At NR, we not only leverage existing foundation models but also train new ones for types of data where none currently exist. 
<\/p>\n\n\n\n<p><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\" style=\"flex-basis:33.33%\">\n<p class=\"has-text-align-left\"><strong>To learn more about deep learning for complex image data, please contact:<\/strong><\/p>\n\n\n\t\t<div id=\"post-type-multi-block_e3ed1b4f1d59b3b5127883f08dae4c7f\" class=\"wp-block-post-type-multi type-manual style-card-bc_employee t2-grid\">\n\t\t\t\t\t\t\t<div class=\"t2-grid-item-col-6\">\n\t\t\t\t\t\t<a href=\"https:\/\/nr.no\/en\/employees\/line-eikvil\/\" class='card-employee'>\n\t\t\t\t\t<figure>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2026\/02\/line-eikvil-4.jpg\" alt=\"\">\n\t\t\t<\/figure>\n\t\t\t\t<div class=\"card-employee__content\">\n\t\t\t<p class=\"card-employee__name\">Line Eikvil<\/p>\n\t\t\t\t\t\t\t<p class=\"card-employee__position\">Research Director<\/p>\n\t\t\t\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 24 24\" height=\"24\" width=\"24\" class=\"t2-icon t2-icon-arrowforward\" aria-hidden=\"true\" focusable=\"false\"><path d=\"M15.9 4.259a1.438 1.438 0 0 1-.147.037c-.139.031-.339.201-.421.359-.084.161-.084.529-.001.685.035.066 1.361 1.416 2.947 3l2.882 2.88-10.19.02c-8.543.017-10.206.029-10.29.075-.282.155-.413.372-.413.685 0 .313.131.53.413.685.084.046 1.747.058 10.29.075l10.19.02-2.882 2.88c-1.586 1.584-2.912 2.934-2.947 3-.077.145-.085.521-.013.66a.849.849 0 0 0 .342.35c.156.082.526.081.68-.001.066-.035 1.735-1.681 3.709-3.656 2.526-2.53 3.606-3.637 3.65-3.742A.892.892 0 0 0 23.76 12a.892.892 0 0 0-.061-.271c-.044-.105-1.124-1.212-3.65-3.742-1.974-1.975-3.634-3.616-3.689-3.645-.105-.055-.392-.107-.46-.083\"\/><\/svg>\n\t\t<\/div>\n\t<\/a>\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div class=\"t2-grid-item-col-6\">\n\t\t\t\t\t\t<a href=\"https:\/\/nr.no\/en\/employees\/rune-solberg\/\" class='card-employee'>\n\t\t\t\t\t<figure>\n\t\t\t\t<img decoding=\"async\" 
src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/05\/rune-solberg-16.jpg\" alt=\"\">\n\t\t\t<\/figure>\n\t\t\t\t<div class=\"card-employee__content\">\n\t\t\t<p class=\"card-employee__name\">Rune Solberg<\/p>\n\t\t\t\t\t\t\t<p class=\"card-employee__position\">Research Director<\/p>\n\t\t\t\t\t\t<svg xmlns=\"http:\/\/www.w3.org\/2000\/svg\" viewBox=\"0 0 24 24\" height=\"24\" width=\"24\" class=\"t2-icon t2-icon-arrowforward\" aria-hidden=\"true\" focusable=\"false\"><path d=\"M15.9 4.259a1.438 1.438 0 0 1-.147.037c-.139.031-.339.201-.421.359-.084.161-.084.529-.001.685.035.066 1.361 1.416 2.947 3l2.882 2.88-10.19.02c-8.543.017-10.206.029-10.29.075-.282.155-.413.372-.413.685 0 .313.131.53.413.685.084.046 1.747.058 10.29.075l10.19.02-2.882 2.88c-1.586 1.584-2.912 2.934-2.947 3-.077.145-.085.521-.013.66a.849.849 0 0 0 .342.35c.156.082.526.081.68-.001.066-.035 1.735-1.681 3.709-3.656 2.526-2.53 3.606-3.637 3.65-3.742A.892.892 0 0 0 23.76 12a.892.892 0 0 0-.061-.271c-.044-.105-1.124-1.212-3.65-3.742-1.974-1.975-3.634-3.616-3.689-3.645-.105-.055-.392-.107-.46-.083\"\/><\/svg>\n\t\t<\/div>\n\t<\/a>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\n\n\t<div class=\"nr-spacer nr-spacer-small wp-block-nr-spacer\">\n\t<\/div>\n\t\n\n\n<div class=\"wp-block-group has-background\" style=\"background-color:#d2f1f3\">\n<p><strong>Research centres<\/strong><\/p>\n\n\n\n<p>We are part of&nbsp;<a rel=\"noreferrer noopener\" href=\"https:\/\/nr.no\/en\/areas\/visual-intelligence\/\" target=\"_blank\">Visual Intelligence<\/a>&nbsp;\u2013<br>a Centre for Research-Based Innovation hosted by UiT The Arctic University of Norway.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img loading=\"lazy\" decoding=\"async\" width=\"960\" height=\"291\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2025\/08\/vi_blaa.png\" alt=\"Visual Intelligence logo in black\" class=\"wp-image-35413\" srcset=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2025\/08\/vi_blaa.png 960w, 
https:\/\/nr.no\/content\/uploads\/sites\/2\/2025\/08\/vi_blaa-300x91.png 300w, https:\/\/nr.no\/content\/uploads\/sites\/2\/2025\/08\/vi_blaa-768x233.png 768w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/><\/figure>\n<\/div>\n\n\n\t<div class=\"nr-spacer nr-spacer-small wp-block-nr-spacer\">\n\t<\/div>\n\t\n\n\n<div class=\"wp-block-group has-background\" style=\"background-color:#cdf1f1\">\n<p><strong>Publications<\/strong><\/p>\n\n\n\n<p>Brautaset, O., Waldeland, A. U., Johnsen, E., Malde, K., Eikvil, L., Salberg, A.-B., &amp; Handegard, N. O. (2020). Acoustic classification in multifrequency echosounder data using deep convolutional neural networks. <em>ICES Journal of Marine Science, 77<\/em>(4), 1391-1400. <a href=\"https:\/\/doi.org\/10.1093\/icesjms\/fsz235\">https:\/\/doi.org\/10.1093\/icesjms\/fsz235<\/a><\/p>\n\n\n\n<p>Gilbert, A. D., Holden, M., Eikvil, L., Aase, S. A., Samset, E., &amp; McLeod, K. (2019). Automated left ventricle dimension measurement in 2D cardiac ultrasound via an anatomically meaningful CNN approach. In <em>Smart Ultrasound Imaging and Perinatal, Preterm and Paediatric Image Analysis: First International Workshop, SUSI 2019, and 4th International Workshop, PIPPI 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 13 and 17, 2019, Proceedings<\/em> (pp. 29-37). Springer International Publishing. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.1911.02448\">https:\/\/doi.org\/10.48550\/arXiv.1911.02448<\/a><\/p>\n\n\n\n<p>Gilbert, A. D., Holden, M., Eikvil, L., Rakhmail, M., Babic, A., Aase, S. A., Samset, E., &amp; McLeod, K. (2021). User-intended Doppler measurement type prediction combining CNNs with smart post-processing. <em>IEEE Journal of Biomedical and Health Informatics, 25<\/em>(6), 2113-2124. <a href=\"https:\/\/doi.org\/10.1109\/JBHI.2020.3029392\">https:\/\/doi.org\/10.1109\/JBHI.2020.3029392<\/a><\/p>\n\n\n\n<p>Kampffmeyer, M., Salberg, A.-B., &amp; Jenssen, R. (2016). 
Semantic segmentation of small objects and modeling of uncertainty in urban remote sensing images using deep convolutional neural networks. In <em>2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Las Vegas<\/em>. <a href=\"https:\/\/doi.org\/10.1109\/CVPRW.2016.90\">https:\/\/doi.org\/10.1109\/CVPRW.2016.90<\/a><\/p>\n\n\n\n<p>Liu, Q., Kampffmeyer, M., Jenssen, R., &amp; Salberg, A.-B. (2021). SCG-Net: Self-constructing graph neural networks for semantic segmentation. <em>International Journal of Remote Sensing, 42<\/em>(16), Article 2021. <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2009.01599\">https:\/\/doi.org\/10.48550\/arXiv.2009.01599<\/a><\/p>\n\n\n\n<p>Thorvaldsen, G., Pujadas-Mora, J. M., Andersen, T., Eikvil, L., Llad\u00f3s, J., Forn\u00e9s, A., &amp; Cabr\u00e9, A. (2015). A tale of two transcriptions: Machine-assisted transcription of historical sources. <em>Historical Life Course Studies, 2<\/em>, 1-19. <a href=\"https:\/\/hlcs.nl\/article\/view\/9355\/9854\">https:\/\/hlcs.nl\/article\/view\/9355\/9854<\/a><\/p>\n\n\n\n<p>Trier, \u00d8. D., Salberg, A.-B., &amp; Pil\u00f8, L. (2018). Semi-automatic mapping of charcoal kilns from airborne laser scanning data using deep learning. In <em>CAA2016: Oceans of Data. Proceedings of the 44th Conference on Computer Applications and Quantitative Methods in Archaeology<\/em> (pp. 219-231). Oxford: Archaeopress. <a href=\"https:\/\/doi.org\/10.1016\/j.jag.2020.102241\">https:\/\/doi.org\/10.1016\/j.jag.2020.102241<\/a><\/p>\n\n\n\n<p>Waldeland, A. U., Trier, \u00d8. D., &amp; Salberg, A. (2022). Forest mapping and monitoring in Africa using Sentinel-2 data and deep learning. <em>International Journal of Applied Earth Observation and Geoinformation, 111<\/em>, Article 102840. 
<a href=\"https:\/\/doi.org\/10.1016\/j.jag.2022.102840\">https:\/\/doi.org\/10.1016\/j.jag.2022.102840<\/a><\/p>\n<\/div>\n<\/div>\n<\/div>\n\n\n\n<p class=\"has-text-align-center\"><strong>Selected projects <\/strong><\/p>\n\n\n\t\t<div id=\"post-type-multi-block_401a01fa0e3385d07ae44aaf6cee131e\" class=\"wp-block-post-type-multi type-manual style-card-bc_project-sm t2-grid\">\n\t\t\t\t\t\t\t<div class=\"t2-grid-item-col-3\">\n\t\t\t\t\t\t<a href=\"https:\/\/nr.no\/en\/projects\/a-foundation-model-for-smarter-climate-action-fm4cs\/\" class=\"card-post card-project\">\n\t\t\t\t\t<figure>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2025\/05\/Sentinel-5.jpg\" alt=\"This is a satellite image of central Norway and Sweden showing snow cover. The image colours are black, various shades of blue and purple, and of course white.\">\n\t\t\t<\/figure>\n\t\t\t\t<div class=\"card-post__content\">\n\t\t\t\t\t\t\t<ul class=\"card-post__categories\">\n\t\t\t\t\t\t\t\t\t\t\t<li>Earth observation<\/li>\n\t\t\t\t\t\t\t\t\t<\/ul>\n\t\t\t\t\t\t<h3 class=\"card-post__title\">THOR: A foundation model for smarter climate action (FM4CS)<\/h3>\n\t\t<\/div>\n\t<\/a>\n\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div class=\"t2-grid-item-col-3\">\n\t\t\t\t\t\t<a href=\"https:\/\/nr.no\/en\/projects\/seabee\/\" class=\"card-post card-project\">\n\t\t\t\t\t<figure>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2023\/07\/dion-tavenier-AwH-d-FUgwo-unsplash.jpg\" alt=\"Automated analysis of drone data\">\n\t\t\t<\/figure>\n\t\t\t\t<div class=\"card-post__content\">\n\t\t\t\t\t\t\t<ul class=\"card-post__categories\">\n\t\t\t\t\t\t\t\t\t\t\t<li>Earth observation<\/li>\n\t\t\t\t\t\t\t\t\t\t\t<li>Climate and Environment<\/li>\n\t\t\t\t\t\t\t\t\t<\/ul>\n\t\t\t\t\t\t<h3 class=\"card-post__title\">Automated analysis of drone data (SeaBee)<\/h3>\n\t\t<\/div>\n\t<\/a>\n\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div 
class=\"t2-grid-item-col-3\">\n\t\t\t\t\t\t<a href=\"https:\/\/nr.no\/en\/projects\/automating-railway-inspections\/\" class=\"card-post card-project\">\n\t\t\t\t\t<figure>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/12\/Autokontroll-tog-eds-1.jpg\" alt=\"The images shows a green Vy train on the railway in a Norwegian landscape. It illustrates how cameras are mounted on the front of the vehicle and what the cameras can capture (shown with different coloured triangles). The landscape is a typical Norwegian landscape with a wooden house in the background, wooded areas and snow on the ground.\">\n\t\t\t<\/figure>\n\t\t\t\t<div class=\"card-post__content\">\n\t\t\t\t\t\t\t<ul class=\"card-post__categories\">\n\t\t\t\t\t\t\t\t\t\t\t<li>Image analysis<\/li>\n\t\t\t\t\t\t\t\t\t<\/ul>\n\t\t\t\t\t\t<h3 class=\"card-post__title\">Automating railway inspections (AutoKontroll)<\/h3>\n\t\t<\/div>\n\t<\/a>\n\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div class=\"t2-grid-item-col-3\">\n\t\t\t\t\t\t<a href=\"https:\/\/nr.no\/en\/projects\/breast-cancer-detection-with-machine-learning\/\" class=\"card-post card-project\">\n\t\t\t\t\t<figure>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2023\/09\/national-cancer-institute-W2OVh2w2Kpo-unsplash-scaled-1-scaled.jpg\" alt=\"The image shows stress fibres and microtubules in human breast cancer. 
Image by: Christina Stuelten, Carole Parent, 2011\">\n\t\t\t<\/figure>\n\t\t\t\t<div class=\"card-post__content\">\n\t\t\t\t\t\t\t<ul class=\"card-post__categories\">\n\t\t\t\t\t\t\t\t\t\t\t<li>Image analysis<\/li>\n\t\t\t\t\t\t\t\t\t\t\t<li>Machine learning<\/li>\n\t\t\t\t\t\t\t\t\t<\/ul>\n\t\t\t\t\t\t<h3 class=\"card-post__title\">Breast cancer detection with machine learning (MIM)<\/h3>\n\t\t<\/div>\n\t<\/a>\n\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div class=\"t2-grid-item-col-3\">\n\t\t\t\t\t\t<a href=\"https:\/\/nr.no\/en\/projects\/deli\/\" class=\"card-post card-project\">\n\t\t\t\t\t<figure>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/02\/arno-senoner-sf4YyPxoCvI-unsplash-scaled.jpg\" alt=\"deep learning-based methods for interpreting seismic data\">\n\t\t\t<\/figure>\n\t\t\t\t<div class=\"card-post__content\">\n\t\t\t\t\t\t\t<ul class=\"card-post__categories\">\n\t\t\t\t\t\t\t\t\t\t\t<li>Image analysis<\/li>\n\t\t\t\t\t\t\t\t\t\t\t<li>Machine learning<\/li>\n\t\t\t\t\t\t\t\t\t<\/ul>\n\t\t\t\t\t\t<h3 class=\"card-post__title\">Deep learning for seismic data (DELI)<\/h3>\n\t\t<\/div>\n\t<\/a>\n\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div class=\"t2-grid-item-col-3\">\n\t\t\t\t\t\t<a href=\"https:\/\/nr.no\/en\/projects\/incus\/\" class=\"card-post card-project\">\n\t\t\t\t\t<figure>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2024\/02\/kenny-eliason-MEbT27ZrtdE-unsplash-scaled.jpg\" alt=\"\">\n\t\t\t<\/figure>\n\t\t\t\t<div class=\"card-post__content\">\n\t\t\t\t\t\t\t<ul class=\"card-post__categories\">\n\t\t\t\t\t\t\t\t\t\t\t<li>Image analysis<\/li>\n\t\t\t\t\t\t\t\t\t\t\t<li>Machine learning<\/li>\n\t\t\t\t\t\t\t\t\t<\/ul>\n\t\t\t\t\t\t<h3 class=\"card-post__title\">Intelligent cardiac ultrasounds (INCUS)<\/h3>\n\t\t<\/div>\n\t<\/a>\n\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div class=\"t2-grid-item-col-3\">\n\t\t\t\t\t\t<a 
href=\"https:\/\/nr.no\/en\/projects\/knowearth-machine-learning-and-human-knowledge\/\" class=\"card-post card-project\">\n\t\t\t\t\t<figure>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2023\/11\/bryan-rodriguez-BckdUV5HFlc-unsplash-1-scaled.jpg\" alt=\"An aerial shot of an ice sheet.\">\n\t\t\t<\/figure>\n\t\t\t\t<div class=\"card-post__content\">\n\t\t\t\t\t\t\t<ul class=\"card-post__categories\">\n\t\t\t\t\t\t\t\t\t\t\t<li>Earth observation<\/li>\n\t\t\t\t\t\t\t\t\t\t\t<li>Climate and Environment<\/li>\n\t\t\t\t\t\t\t\t\t\t\t<li>Mapping and map revision<\/li>\n\t\t\t\t\t\t\t\t\t<\/ul>\n\t\t\t\t\t\t<h3 class=\"card-post__title\">Machine learning and human knowledge (KnowEarth)<\/h3>\n\t\t<\/div>\n\t<\/a>\n\n\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<div class=\"t2-grid-item-col-3\">\n\t\t\t\t\t\t<a href=\"https:\/\/nr.no\/en\/projects\/fully-automated-bridge-inspections\/\" class=\"card-post card-project\">\n\t\t\t\t\t<figure>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/nr.no\/content\/uploads\/sites\/2\/2023\/11\/martin-fahlander-GqilGUeeO6Q-unsplash-scaled.jpg\" alt=\"The image is a landscape shot of the bridge in Atlanterhavsvegen on the west coast of Norway. The bridge crosses a stretch of water and the land is bare and arctic. 
The sky is cloudy and grey.\">\n\t\t\t<\/figure>\n\t\t\t\t<div class=\"card-post__content\">\n\t\t\t\t\t\t\t<ul class=\"card-post__categories\">\n\t\t\t\t\t\t\t\t\t\t\t<li>Earth observation<\/li>\n\t\t\t\t\t\t\t\t\t<\/ul>\n\t\t\t\t\t\t<h3 class=\"card-post__title\">Monitoring critical infrastructure using drones (InfraUAS)<\/h3>\n\t\t<\/div>\n\t<\/a>\n\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t","protected":false},"featured_media":32347,"parent":0,"menu_order":7,"template":"","meta":{"_acf_changed":false,"_trash_the_other_posts":false,"editor_notices":[],"footnotes":""},"class_list":["post-32296","bc_area","type-bc_area","status-publish","has-post-thumbnail"],"acf":[],"_links":{"self":[{"href":"https:\/\/nr.no\/en\/wp-json\/wp\/v2\/bc_area\/32296","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nr.no\/en\/wp-json\/wp\/v2\/bc_area"}],"about":[{"href":"https:\/\/nr.no\/en\/wp-json\/wp\/v2\/types\/bc_area"}],"version-history":[{"count":5,"href":"https:\/\/nr.no\/en\/wp-json\/wp\/v2\/bc_area\/32296\/revisions"}],"predecessor-version":[{"id":36046,"href":"https:\/\/nr.no\/en\/wp-json\/wp\/v2\/bc_area\/32296\/revisions\/36046"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/nr.no\/en\/wp-json\/wp\/v2\/media\/32347"}],"wp:attachment":[{"href":"https:\/\/nr.no\/en\/wp-json\/wp\/v2\/media?parent=32296"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}