Object-based Image Analysis (OBIA)

Welcome and Introduction

Stefan LANG & Dirk TIEDE

[1] Welcome [2] Learning objectives [3] Content overview [4] Objects in our life [5] Example: landscape objects [6] OBIA - information update [7] What is / what does OBIA [8] Two related pillars [9] (Very) high resolution data

856.908 | University of Salzburg | Department of Geoinformatics | (c) 2010-2022

[1.1] Welcome

Preamble

  • Welcome to the world of "object-based image analysis"!

  • This course was designed for students to study the content on their own. The required material is available through the course (web-)site, documents are accessible from a single entry point. Multimedia content requires a respective software environment (movies, audio).

  • The structure of the course is twofold, comprising a theoretical body and a set of hands-on exercises. While some of the exercises are designed for the use of a commercial software package (eCognition), they can be accomplished with a free evaluation copy.

  • We appreciate any feedback on content, structure, or design, in order to keep the material updated and appealing for future student generations.



  • Explore & Enjoy!

Contact

Dr Stefan Lang & Dr Dirk Tiede, Professors at Department of Geoinformatics (Z_GIS), Faculty of Digital and Analytical Sciences (DAS), Paris-Lodron University of Salzburg (PLUS) Schillerstr. 30, 5020 Salzburg, Austria. {stefan.lang; dirk.tiede}@plus.ac.at +43-662-8044-7510

Disclaimer

  • IPRs All intellectual property rights relating to the content of the course, its structure and chapter headings, the figures and illustrations, the written and oral statements, as well the conceptual design of the exercises and examples, are with the authors.

  • Lineage A first version of this course was built based on the PhD work of S Lang and teaching material for courses on remote sensing delivered by S Lang and T Blaschke between 2002 and 2006. The present version 2.0 is based on material from University courses taught by S Lang and D Tiede, including an online OBIA course from 2010 onwards. The course also contains conceptual material originating from Master theses done at Z_GIS in the field of OBIA, notably from F Albrecht and M Hagenlocher. The latest update and conversion to reveal.js has been accomplished with support of A Schlagbauer and V Streifeneder between 2020 and 2022, as a contribution to the EO4GEO Skills Alliance.

  • Usage The course shall be used by students to achieve the given learning objectives through self-study. Using the material in the broader academic context is encouraged, provided it is properly cited.

  • Reuse and reproduction The online material of this course (including slide-deck and explaining text) has been released under a creative-commons share-alike principle. All rights remain with the authors. Please contact us for granting access to the Git repository of this course.

  • Liability Authors do not accept any liability for misuse or inappropriate usage of the described techniques, nor for whatsoever failure when applying these in academic, professional, or any other context.

[1.2] Learning objectives

Learning objectives

  • This course aims at introducing the field of object-based image analysis, providing a comprehensive overview of its techniques and theoretical background.

  • Specific learning objectives include:

    1. Remember how the approach of object-based image analysis has emerged from bridging remote sensing and geoinformatics

    2. Understand the scientific foundations (hierarchy theory, human perception, knowledge representation, computer vision) for gaining comprehensive background knowledge for applying the tools.

    3. Apply various tools and techniques in geographical applications and related fields.

    4. Compare OBIA methods to pixel-based remote sensing concepts (e.g. multispectral classification, accuracy assessment) and understand how the latter needs to be adapted

    5. Practice OBIA workflows in a dedicated software environment mimicking real-world scenarios.

Course structure

  • The course contains both theoretical background and practical exercises:

  • Theoretical chapters

    • Oral explanations, commented slide sequences, and various sources of other material

    • Designed as a foundation for handling the challenges of practical exercises.

  • Practical exercises

    • Self-explanatory with step-by-step instructions for getting acquainted with relevant software and tools.

    • Designed to demonstrate potential and limitations of OBIA and to develop a critical thinking towards the presented routines.

[1.3] Content overview

Table of contents

  • This course comprises the following main chapters:
    1. Introduction
    2. Image interpretation and perception
    3. Basic concepts of hierarchy theory
    4. Knowledge representation
    5. Image segmentation
    6. Object-based classification (incl. class modelling)
    7. Accuracy assessment (incl. object validity)
    8. OBIA for non-image data

Assignments

  • Course assignments are based on three components:
    1. Answering conceptual/theoretical questions (40%)
    2. A (short) literature review (20%)
    3. The completion (and documentation) of practical exercises (40%)

[1.4] Objects in our life

Objects in our life

  • We deal with objects constantly in our lives
  • What are ‘objects’? (lat. obicere, ‘to offer’, ‘to reply’, ‘to object to’)
  • In contrast to abstract ideas, objects are conceptual wholes
  • Professionals have specific sets of objects
  • Objects reduce complexity – again in both private and professional life
  • Objects can be grouped into classes or categories professionals are interested in.
  • Objects form a hierarchy

    Why are numbers not considered objects? When and how do these numbers turn into meaningful representations of real-world units?

Objects in our life

  • Forest structure
    • Characterized by the position of trees, vertical layering, tree species mixture, and age class mixture
    • Spatially explicit characteristics of arrangement
    • A crucial aspect, e.g. for evaluating the protection function
    • Key parameter for assessing forest integrity in general
    • Maier et al. 2000, Lang et al. 2006
      Protection forest in Vorarlberg, Austria

Objects in our life

  • Forest stands
    • Different forest structures, including open (sparse) or closed forests and multiple tree canopy layers
    • Scale-specific delineation of stands

    Different forest stand types in the Bavarian Forest national park, Germany

Expert delineation vs. automated delineation (Tiede et al. 2004, Lang et al. 2006)

[1.5] Example: landscape objects

Landscape objects

Landscape (objects) as viewed from a plane. Depending on the application domain or purpose a different set of objects (or target classes) may be of interest.

Landscape objects

"As a spatial planner I am interested in the distribution and types of settlements. I am looking at connectivity through roads and green spaces in between…"
"As an ecologist I distinguish more natural landscapes from areas heavily altered by humans…"
"As a tourism promoter I consider the entire region a rural area…"

Landscape and objects

What makes up a settlement?
  • size...
  • structure...
  • the constituting elements...
  • the way it is situated?
What makes up a natural river system?
  • the meanders...
  • the distance from anthropogenic features...
  • the size?
What defines an intensively used agricultural area?
  • the absence of anything else but crop fields...
  • the dominance of a specific crop field?

[1.6] OBIA - information update

OBIA information update

  • Derives recent information from (usually) image data
  • Is capable of integrating all kinds of spatial data to support this task (not necessarily confined to geographical scales)
  • Our focus: OBIA in the context of geographical applications → GEOBIA
    • integrates different target geometries
    • ... spatial information statistically
    • ... sensors and other continuous measurements;
    • models complex classes;
    • provides (semi-)automated routines.
    A number of additional geospatial layers supporting image analysis, integrated by OBIA

    OBIA information update

    • Some composite objects cannot be directly delineated; they need to be 'modelled'.
    • Here a biotope-complex class “mixed arable land” is modelled
    • This intermediate stage shows the constituting elements used
    • From a user's point of view, this product is 'use-less', as the information is up-to-date but not provided in the required form.
    • Elementary landscape units for class modelling. SPOT 5 image from Sept 2006, Credits: © Z_GIS.

    OBIA information update

    • Now the 'mixed arable land' object is fully valid (from the user's point of view)
    • It is composed of a reasonable mix of grassland and crop fields
    • It obeys the average size constraints (2 ha)
    • It is fully congruent with administrative boundaries (here: cadastre)
    • Final result of class modelling including boundary optimization. Credits: © Z_GIS.

    Quotation

    Lang, S. (2008). Object-based image analysis for remote sensing applications: modeling reality – dealing with complexity. In T. Blaschke, S. Lang & G. J. Hay (Eds.), Object-Based Image Analysis - Spatial concepts for knowledge-driven remote sensing applications (pp. 3-28). Berlin: Springer.

    [1.7] What is / what does OBIA

    What is/ What does OBIA?

    • Bridging methods and techniques from the (formerly distinct) realms of GIS and Remote Sensing
    • So is it a paradigm shift?
      • Yes, in the sense of addressing research questions more efficiently (more intelligently?)
    • Scarcity in data? Not really, but a need for adapted methods, e.g. for the validation of OBIA results
  • OBIA – what is it for?
    • (what we would like to achieve) to represent complex scene content, to describe the imaged reality as well as possible, to understand a maximum of the respective content, to extract it, and to convey it to users or researchers
    • (how we do it) OBIA combines spatial concepts with signal processing for (semi-)automated image analysis that works on objects rather than on pixels

    [1.8] Two related pillars...

    Two related pillars...

    • Image segmentation and image classification
      • Both of them use algorithms that require specific parameterization
      • to (semi‐)automatically perform specific processing tasks
    • Segmentation
      • provide units (image objects), which can be labelled, assigned to classes or used for further class modelling
      • By grouping neighboring pixels into larger wholes according to their similarity, sensu (bottom-up) regionalization
      • Key challenge: to provide image objects which match the scale of operation

    Two related pillars...

    • Classification
      • The assignment of objects to classes via labelling or class modelling
      • We differentiate between feature extraction (e.g. single dwellings or trees) and wall-to-wall classification.
      • Full representation of a scene content is the actual aim of image understanding
      Feature extraction (dwellings, left) vs. wall-to-wall classification (right)

    Segmentation/ Classification

    • A cyclic process ...

    [1.9] (Very) High Resolution Data

    (Very) High Resolution Data

    • Very high spatial resolution (VH(S)R) data today: sub-meter, down to 50 cm.
    • Higher spatial resolution usually accompanied by higher radiometric (and often higher spectral) resolution
    • Increasing resolution means increasing complexity
    • Experience of interpreters is as important as in air-photo interpretation, but needs to be incorporated into rules and algorithms

    (Very) High Resolution Data

    This scene shows the core of the inner city of Salzburg with the cathedral and the Salzach river. Sensor: WorldView-2, Date: 31 Aug 2010, Ground resolution: 0.5 m, 8 bands, 11-bit, Credits: © Z_GIS / DigitalGlobe

    (Very) High Resolution Data

    Sensor resolution hits the scale of human activity

  • Field of potential applications is increasing
  • More and more detailed object levels can be represented
  • Complexity increases as well

    (Very) High Resolution Data

    Bush encroachment in a bog area. Sensor: aerial photograph, Year: 1999, Ground resolution: 0.1 m, colour, Credits: © Government of the Federal State of Salzburg

    (Very) High Resolution Data

    This scene shows a subset of an IDP camp in West Darfur. Sensor: GeoEye, Ground resolution: 0.5 m, panchromatic, Credits: © Z_GIS / GeoEye.

    Quotation

    Lang, S. (2008). Object-based image analysis for remote sensing applications: modeling reality – dealing with complexity. In T. Blaschke, S. Lang & G. J. Hay (Eds.), Object-Based Image Analysis - Spatial concepts for knowledge-driven remote sensing applications (pp. 3-28). Berlin: Springer.

    Object-based Image Analysis (OBIA)

    Image interpretation and perception

    Stefan LANG & Dirk TIEDE

    [1] Visual perception [2] Early vision and gestalt [3] Role of experience [4] Pixel-scape and object-scape [5] Visual delineation vs. segmentation

    University of Salzburg | Department of Geoinformatics | (c) 2010-2022

    [2.1] Visual Perception

    What is / what does OBIA?

    Overview of features of OBIA

    Visual Perception


    Numbers are not objects. When looking at this matrix of values we can only tell differences or similarities, but can hardly derive any other meaningful information.

    Visual Perception


    When the matrix is visualized using a unique-value color scheme, it does not get much better. Only homogenous areas or regular patterns become visible.

    Visual Perception


    But when the numbers are visualized in a logical sequence (grey scale), a meaningful picture is revealed.

    (If you can't recognise the person, simply take a few steps back, remove your glasses, or squint your eyes …)

    Image context

    Zebra stripe sample: contrast black/white and shape: "black line in white area"

    Zebra stripe sample: contrast b/w and shape (elongated and acute): "stripes resembling a zebra pattern"

    Zebra couch sample: mainly shape (pattern suppressed): "chair with zebra pattern"

    [2.2] Early vision and gestalt

    Early Vision

    • Conceptual framework of Marr (1982)

    • Three-leveled structure of visual information processing

      • Computational level (purpose and strategy of perception)

      • Algorithmic level (implementation)

      • Hardware level (physical realization)

    David Marr Book Front
    Geohumanitarian Action Example

    Marr's Paradigm

    • Computational level

      • Visual processing in stages (“sketches”)

      • Provide more detailed information

      • 2D representation of a scene

        • Raw primal sketch:
          grey shades and colour tones

        • Full primal sketch:
          blobs and edges, 'place tokens'

    • Algorithmic level

      • Scale-space analysis

      • multi-scale image segmentation

      • class modelling etc.

    Infrared Image

    Recognizing objects

    strawberries

    What makes a 1-year old recognize various instances of a strawberry?

    Optical illusion

    What makes us recognize objects where there are none …?

    Recognizing objects

    Image of figure from distance
    Artwork by German sculptor Stephan Balkenhol. Photos: S Lang
    Image of figure closer

    We perceive a shoulder ... despite the fact there is no distinct boundary!

    🡇

    Image of figure zoomed in

    Expert 'filter'

    Delineation of objects

    Objects and gestalt

    • Gestalt approach (Wertheimer, Koffka, etc.)

    • Ehrenfels criterion: a gestalt is more than the mere sum of its parts (emergent properties)

      • ‘Laws’ of perceptual organization

        • Factor of good gestalt

        • Factor of proximity (granularity)

        • Factor of good continuation

      • Those laws do not explain how perception actually works, but help to predict how structures are perceived

    Optical Illusion 1 Optical Illusion 2

    Orchard problem

    Orchard Problem 1
    Orchard Problem 2
    Orchard Problem 3

    [2.3] Role of experience

    Role of experience

    Classified picture with a river

      A river has

    • spectral properties: 'blue' (as water)

    • in addition specific form / shape: elongated, quasi 'linear'

    Classified picture with a river with highlighted features

      A municipal park has

    • spectral properties: ‘green’ (as vegetation)

    • in addition specific spatial context: surrounded by urban settlement

    From Definiens, 2004

    Quotation

    Campbell, J. B. (2002). Introduction to Remote Sensing. New York: The Guilford Press.

    Role of experience

    • Human vision is well adapted for complex image interpretation tasks

    • Experience built up since early childhood

    • But human vision is challenged when dealing with remote sensing imagery:

      • Applying an overhead view

      • Dealing with spectral characteristics beyond the visual spectrum

      • Working with unfamiliar scales and resolutions

    🢂 Experience is an important prerequisite for skillful and successful interpretation

    Different views of the castle in Salzburg, Austria

    Quickbird picture of the castle in Salzburg, bands 3,2,1

    Quickbird: bands 3,2,1

    Quickbird picture of the castle in Salzburg, bands 4,3,2

    Quickbird: bands 4,3,2

    Aster picture of the castle in Salzburg, green, red, infrared

    Aster: green, red, infrared

    Color photo of the castle in Salzburg

    Color photo

    Role of experience

    • Personal experience

      • depending on age

      • etc.

    Picture of a bottle

    Small kids see dolphins in a bottle

    Unknown source

    Role of experience

    • Professional experience

    • e.g. spectral signature

      • We know spectral profile of geographic features

      • We know what different colours mean, including ‘false colours’ (IR …)

    Spectral Signature of Vegetation

    Role of experience

    • Spatial continua

      • Pseudo-continuous representation of geographical phenomena which occur continuously

      • 'analysis extent': small subset of entire space

      • Discretisation

      • Bubble with associations

        "Think of temperature. You can't go someplace where there isn't one."

    Quantisation

    • Definition: mapping the continuous brightness range onto discrete grey values by scanning with a specific raster width (a small example follows after the figure caption)

    Quantisation by four grey values (2 bit per pixel: 0, 1, 2, 3)
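
    A minimal numeric sketch of this definition (not part of the course material; the brightness array and its scaling to [0, 1] are illustrative assumptions):

```python
import numpy as np

# Quantise a continuous brightness range into 2-bit grey values (0-3).
# 'brightness' is a hypothetical array of readings scaled to [0.0, 1.0].
brightness = np.array([[0.05, 0.30, 0.55],
                       [0.72, 0.98, 0.40]])

levels = 4  # 2 bit per pixel
quantised = np.clip((brightness * levels).astype(int), 0, levels - 1)
print(quantised)  # each pixel now holds one of the grey values 0, 1, 2, 3
```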

    [2.4] Pixel-scape & object-scape

    Pixel-scape

    • Pixel

      • 'picture element'

      • integrated signal - depending on GSD

      • treated individually, no matter where located

    Raster image
    Visualization of the raster image in different bands
    Plot of different band values

    Pixel-scape

    • Pixel-based classification process

    Raw image

    Raw Image

    🢂

    Feature space

    Feature Space

    🢂

    Classified image

    Classified image

    Source: Jensen, 2005

    • Problems

      • Spectral values belong to more than one information class

      • No spatial relationship used in classification

      • The pixel is an artificial spatial unit

      • 'Artifacts' (salt-and-pepper effect)

    Pixel-scape

    • Limitations of pixel-based analysis

    • considering

      • Colour (spectral reflectance in n bands)

      • 'Space' in the sense of texture (given environment, e.g. 3*3 pixels)

    • but not ...

      • Form & shape

      • Neighborhood & context

      • Hierarchical levels

    Limitations of pixel-based analysis

    Object-scape

    Manual delineation

    Manual delineation ('Bog')

    Pixel-based classification

    Pixel-based classification

    🡿

    Object-based classification

    Object-based classification

    🡾

    Object-scape

    • Relation between target objects and spatial resolution

      • Increasing importance of VHR EO data

      • High level of detail provides extended set of target classes

      • Addressing these target classes in Landsat imagery would fail

    • Quotation

      • Hay et al. 2003

    Relation between target object and spatial resolution
    Relation between target object and spatial resolution

    Object-scape

    • Integration of Remote Sensing and GIS

      • GIS users are 'used to' polygon environment

      • Aggregation of information (highly textured images like VHR data or radar)

      • Modelling of scale-specific ecological processes through multi-scale representation

    • Quotation

      • Blaschke & Strobl 2001

    Segmented image
    Object of Interest within image

    Object-scape

    • Meaningful objects

    • Improved reliability of statistics

      • Various measurements (pixels) per object

      • Crisp boundaries

    • Object

      Exemplary object

      Diagram
    • Augmented, uncorrelated feature space

      • Texture within objects, shape, neighbors, hierarchy

    [2.5] Visual delineation vs. machine-based segmentation

    Visual interpretation vs. automated delineation

    • Providing homogenous units

      • Visual delineation: scale-specific, inherently applying generalization

      • Segmentation can be optimized by scale definition (scale-adaptive vs. scale specific) and complexity of boundary

    • Manual delineation

      Automated delineation

      Habitat delineation in an Alpine area. Left: visual interpretation, scale-specific, on FCIR air-photo, 25 cm GSD. Right: automated delineation by segmentation, scale-adapted, on pan-sharpened QuickBird imagery (0.6 m GSD)

    Visual interpretation vs. automated delineation

    • Problems occurring with visual delineation (may be solved via segmentation):

      • selection of appropriate generalization level

      • individual delineations

      • placement of boundaries when there is a gradual transition

    • Problems that challenge machine-based segmentation:

      • Delineating conceptual boundaries (e.g. the 'outline' of an orchard, see above)

    • Quotation

      • Campbell 2002

    Different possibilities for delineating an object

    Several possibilities for the delineation of 'Forest'

    Visual interpretation vs. automated delineation

    GeoEye - Segmented picture

    GeoEye
    (50 cm)

    Small subset of a refugee camp

    Image segmentation based on reflection values and optimizing shape

    Visual interpretation vs. automated delineation

    Image sequence of a Geohum example: original scene, objects not yet identified, and annotated objects

    Visual interpretation vs. automated delineation

    Geohum picture with delineated objects

    Visual interpretation vs. automated delineation

    • Visual interpretation

      • Characteristics

        • Texture, colour and form interpreted in a combined manner

        • Several scale domains perceived simultaneously, detail or noise is generalized

        • Number of distinguishable classes depends on experience

      • Strengths

        • Units aggregated with ease

        • Interpretation result can be improved by learning and experience

      • Limitations

        • Delineating a large set of small units is a tedious job

        • Can hardly be automated

        • Processing speed increases through experience but remains linear

    Visual interpretation vs. automated delineation

    • Automated delineation

      • Characteristics

        • The specifics of colour, texture, and form need to be captured in rules

        • Complex spatial aggregates need to be modelled

        • Hierarchical relations between scales can be utilized, but scales cannot be captured simultaneously

        • Scale adaptive vs. scale specific delineation

      • Strengths

        • Automated, optimized, transferable

        • Large number of small, homogenous units can be delineated with ease

      • Limitations

        • Only objects that are delineated can be used later

        • Orchard problem: boundaries are conceptual only and not directly detectable

    Object-based Image Analysis (OBIA)

    Basic Concepts of Hierarchy Theory

    Stefan LANG & Dirk TIEDE

    [1] Multi-scale representation [2] Scale [3] Hierarchy theory [4] Hierarchical patch dynamics [5] Multi-scale imagery [6] Scale space analysis.

    University of Salzburg | Department of Geoinformatics | (c) 2010-2022

    [3.1] Multi-scale representation

    Multi-scale representation

    • Assumptions
      • World in its complexity is hierarchically structured
      • This complexity can be decomposed and structured in scaled representations
      • Objects are interlinked on different scales (spatial concepts required)
      • Multi‐scale segmentation mimics our way of perception

    Multi-scale representation

  • Human eye
    • Represents reality in various levels simultaneously
    • We may need to have ‘a closer look’, but the crucial information is conveyed instantly

    We perceive several scales simultaneously, from single trees over forest stands and fields to forests and agricultural land

    From: Lang 2002, Photo: T.Blaschke

    Multi-scale representation

    • Human eye
      • Represents reality in various levels simultaneously
    • Image segmentation
      • Representation in various levels sequentially

    Scale-adaptive multi-scale representation (boundaries congruent but not scale-specific)

    Multi-scale representation

    • Human eye
      • Represents reality in various levels simultaneously
    • Image segmentation
      • Representation in various levels sequentially
      • Links between levels via object hierarchy (scale-adaptive or scale-specific)
      • How to hit the appropriate scale domain?
    From Lang et al. 2004

    Object hierarchy from 'pixel level' to 'level 3'

    [3.2] Scale

    Scales

    • Two inherent scale domains
      • The entire face (including the recognition of a person)
      • The constituting parts (eyes, forehead, nose, beard, collar)
      • But not: single pixels (that’s why we twinkle our eyes)

    Scales

    • Scale
      • Def: “the period of time or space over which signals are integrated [...] to give message ” (Allen & Starr, 1982)
      • Here: Refers to the size of objects in reality or in a representation of it (e.g. a map or a satellite image)
    • Different objects have different scales
      • Every object has an inherent scale
      • It appears within a certain scale range
    • Depending on the elevation of our viewpoint we see certain objects

    Different view - different scales - different objects ...

    Grain and Extent

    • Relevant range of the scale spectrum for landscape analysis
    • Grain
      • minimum area at which an organism responds; comparable to resolution (spatial, spectral, temporal) in an image
    • Extent
      • coarsest scale of spatial heterogeneity; extent of the whole scene (total area, bandwidths, covered temporal duration)

    Same landscape, same extent, different 'grain size'

    [3.3] Hierarchy theory

    Holons

    • Holons
      • Greek holos [whole] and ‐on [part] (proposed by Koestler, 1967)
      • “any stable sub‐whole in a hierarchy” (Haigh, 1987)
      • defined by a set of governing principles =‘canon’
      • determines invariant properties and options of responses to environmental changes (flexible strategies)

    "[...] self-creating, open sytsem, governed by a set of laws which regulate its coherence, stability, structure and functioning" (HAIGH,1987)

    [3.4] Hierarchical Patch Dynamics

    Decomposability & holarchy

    • Landscape as a system
      • Consisting of interacting subsystems
      • Hierarchical Patch Dynamics Paradigm (HPDP) , Wu 1999
    • Decomposition
      • Separating a system into its components according to their scale and ordering them hierarchically

    Decomposing

    Hierarchical organization

    Quotation

    Wu & Louks 1995

    and Simon 1962,1973

    Decomposability & holarchy

    • Scaling ladder
      • Every portion of land contains objects of many different scales resulting in a series of scales
    • Boundaries within the scale spectrum
      • Thresholds between ranges of scale are never crisp

    Decomposability & holarchy

    • Subsystems
      • Horizontal and vertical coupling
    • Self‐assertive tendencies
      • rather independent from each other
    • Integrative tendencies
      • Part‐being of constituting elements

    [3.5] Multi-scale imagery

    Multiple scales in images

    • Definition of fundamental objects in remote sensing images
      • Integrated objects vs. aggregated objects
    • Interaction of objects within and across scale domains
      • What scales should be chosen for the different objects?
      • At what scale should hierarchies be established?
      • Is a pan‐hierarchical representation sufficient or do we need regionalized or class‐specific segmentation?

    From: Lang,2002

    [3.6] Scale-space analysis

    Scale-space analysis

    • Linear Scale Space Analysis
      • Repetitive smoothing of an image applying a Gauss filter
      • Image objects emerge and disappear in a dynamic process
    • Multiscale representation
      • Stacking of multiple filter layers (a minimal sketch follows below)
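
    A minimal sketch of such a stack (an illustration, not the course's implementation; it assumes a single-band array and uses scipy.ndimage.gaussian_filter):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space_stack(band, sigmas=(1, 2, 4, 8)):
    """Repeatedly smooth a single-band image with Gaussian filters of
    increasing sigma and stack the results (one layer per scale)."""
    return np.stack([gaussian_filter(band.astype(float), sigma=s) for s in sigmas])

# Hypothetical usage: 'band' is a 2-D array of grey values.
band = np.random.randint(0, 256, size=(128, 128))
stack = scale_space_stack(band)
print(stack.shape)  # (4, 128, 128): fine detail vanishes as sigma grows
```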

    Scale-space analysis

    • Blob detection
      • Dynamic image objects
      • 4 dimensions: x & y, reflection, scale
      • Visualisation as 2D‐blobs (a) vs. 3D Hyper blob (b)
    • Scale-Space Events
      • C: Creation
      • A: Annihilation
      • S,M: Split and Merge
    • Application
      • E.g. tree crown detection

    From: Hay et al., 2003

    Object-based Image Analysis (OBIA)

    Knowledge Representation

    Stefan LANG & Dirk TIEDE

    [1] Knowledge representation - why? [2] Experience and learning [3] Production systems [4] Fuzzy sets [5] Image understanding [6] Object categories

    University of Salzburg | Department of Geoinformatics | (c) 2010-2022

    [4.1] Knowledge Representation - Why?

    Why?

    • An intuitive, yet experience‐ and culture‐driven concept of our geographical reality.
    • While single elements are perceived, the entire image is understood at the same time.
    • Once understanding the whole scene we assign meaning, maybe even emotion, to it.
    • To achieve such level in image analysis (or at least an approximation), we need to make implicit knowledge explicit!

    Knowledge Representation

    • Knowledge plays a key role in the interpretation‐oriented parts of the remote sensing process chain (Campbell 2001)
    • Colour, form, and arrangement evoke certain parts of our experience and knowledge
    • Object hypotheses are tested and verified against what we see
    • Matching to stored information

    [4.2] Experience and Learning

    Experience and Learning

    • Training feeds knowledge through formalized learning
    • Artificial intelligence (AI)
      • Procedural knowledge vs. structural knowledge
      • Knowledge organizing systems (KOS), formal concept analysis (FCA)
      • Semantic nets
    • Artificial neural networks
      • Neuron‐like machines
      • Adaptive vector coding with tuned weights and synapses
      • But otherwise a black-box system with limited transparency

    Experience and Learning

    • Natural computing (Binnig et al. 2002)
      • Parallel nature of natural thinking and computing
      • Expert systems with net‐like character
      • Self‐organizing, semantic, and self‐similar network (Triple‐S)
      • Fractal machine with excitements and thresholds (activation)
      • Inheritance concept
      • Unit delineation and labelling in cyclic manner

    http://www.fractal.org

    [4.3] Production Systems

    Rule-based production system

    • Rule‐based approach
      • Logical inference (if … then)
      • Knowledge needs to be encoded in rules
      • Transparency of intelligent system
    • Challenges
      • How to encode implicit knowledge into rules?
      • How do we guarantee the integrity of such rule sets?
    • Procedural and semantic dimension
    • Considering vagueness in class assignment ➞ fuzzy approach

    [4.4] Fuzzy Rule Sets

    Fuzzy rule sets

    • We would all consider freshly brewed tea as something “hot”
    • Eventually the tea gets cold … just when exactly and at which temperature?
    • We do not know precisely, but:
      • We all have a concept of “warm”, “cold”
      • We simplify a gradual phenomenon and categorize it
      • Boundaries between these concepts are not crisp
      • The category becomes clearer with increasing distance from boundaries

    www.screenplay-contest.com

    Fuzzy rule sets

    • Vagueness in class description
      • Imprecise human thinking and vague (linguistic) class descriptions
      • Uncertainty in sensor measurements
      • Class mixtures due to limited (spatial and spectral) resolution
    • Potential of fuzzy rule sets
      • Express each object’s membership to more than one class
      • Probability of class assignments

    Source: unknown

    Fuzzy Rule Sets

    • The fuzzy set (A):
      • Is a certain subset of values of the whole range of an object feature X (e.g. NIR‐band)
      • Represents an object feature class (e.g. forest) within one object feature
    • Replace the Boolean logic ("false" and "true") of the membership value μ by a continuous range of [0, …, 1]
    • Define membership function μ(x)
      • Assigning to every object feature value x a membership value μ
      • If μ > 0, then x belongs to the fuzzy set A
      • Relation between object feature and classification
      • ⇒ Choice and parameterisation of the membership function influence the quality of the classification (a common form is sketched below)

        ⇒ Introducing expert knowledge
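
    One common form of such a membership function, sketched here as an assumption (a linear ramp between a lower bound a and an upper bound b of the feature value x; the course does not prescribe this particular shape):

```latex
\mu_A(x) =
\begin{cases}
0 & x \le a \\
\frac{x - a}{b - a} & a < x < b \\
1 & x \ge b
\end{cases}
\qquad \mu_A(x) \in [0, 1]
```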

    Fuzzy rule sets

    • Fuzzy rule “if – then” for assigning an object to a class
      • If feature value x (of the object) is member of the fuzzy set (e.g. associated with the class forest), the image object is a member of the land-cover forest
    • Combination of fuzzy sets to create advanced fuzzy rules
      • Operator “AND” – Minimum operation
      • Operator “OR” – Maximum operation
      • Operator "NOT" – inversion of a fuzzy value: returns 1 − (fuzzy value); a code sketch of these operators follows after this list
    • Fuzzy rule‐base (combination of the fuzzy rules of all classes) delivers a fuzzy classification
      • Every object has a tuple of return values assigned to it with the degrees of membership to each class/degrees of class assignment
      • Since these values are possibilities to belong to a class, they don’t have to add up to 1 (unlike probabilities)
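
    A minimal code sketch of these operators and of defuzzification by maximum membership (illustrative values only, not course data):

```python
# Minimal sketch of fuzzy rule combination and defuzzification.
def fuzzy_and(*values):   # "AND": minimum operation
    return min(values)

def fuzzy_or(*values):    # "OR": maximum operation
    return max(values)

def fuzzy_not(value):     # "NOT": inversion, 1 - value
    return 1.0 - value

# Tuple of membership degrees for one image object (possibilities, need not sum to 1)
memberships = {"forest": fuzzy_and(0.8, 0.9),   # e.g. an NIR rule AND a texture rule
               "grassland": 0.35,
               "water": fuzzy_not(0.95)}

# Defuzzification: take the class with the maximum membership degree
best_class = max(memberships, key=memberships.get)
print(best_class, memberships[best_class])   # forest 0.8
```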

    Fuzzy rule sets

    • Comparison of membership degrees
      • Reliability of class assignment
        • The higher the degree of the most possible class, the more reliable is the assignment
      • Stability of classification
        • Stable classification when there are clear differences between the highest membership value and the other values
      • Equal membership degrees
        • High values – reliability for both classes: the classes cannot be distinguished with the provided classification
        • Low values – unreliable classification (use threshold of a least required membership degree to ensure quality of classification)
    • Defuzzification
      • Maximum membership degree of fuzzy classification used as crisp classification

    [4.5] Image Understanding

    Image understanding

      a. Definition

      Image understanding (IU) is a process leading to the description of the image content (= reconstruction of an image scene) (Prinz, 1994)

      b. Extent of IU

      Reaching from signals (image data) to a symbolic representation of the scene content

      c. Conditions for IU

        Outcome depends on the domain of interest of the interpreter, defined by:
      • Underlying research question
      • Specific field of application
      • A priori knowledge and experience of the interpreter
      d. Output description
      • Description of real-world objects and their relationship in the specific scene
      • Resulting in thoroughly described features (not mere listing and labelling of features)
      e. Knowledge input
        Process is driven by
      • Utilisation of procedural knowledge
      • Transformation of structural knowledge
      f. Involved disciplines
      • Image preprocessing
      • Pattern recognition
      • Artificial intelligence

    IU and OBIA

    [4.6] Object categories

    Object categories

    • Lang, S., F. Albrecht, S. Kienberger & D.Tiede (2010): Object validity for operational tasks in a policy context. Journal for Spatial Science, 55(1),9-22.
    • Implications of OBIA for image understanding and basic object ontology in general

    Object categories

    • Bona fide objects
      • With 'natural' boundaries
      • Correspond to local physical discontinuities
      • Perceived by people in more or less the same way
    • Composite objects
      • Real-world modelled objects
      • Correspond to functional homogeneity
      • Perceived by experts in a similar way (convention)
    • (Conceptual) Fiat objects
      • With 'un-natural' boundaries
      • Correspond to no genuine heterogeneity
      • Objects not obvious in the landscape

    cf. Lang et al. 2010

    Object categories

    From: Lang et al. 2010

    Object-based Image Analysis (OBIA)

    Image Segmentation

    Stefan LANG & Dirk TIEDE

    [1] Brief history [2] What is segmentation? [3] Image segmentation in remote sensing [4] Groups of segmentation [5] Multi-scale segmentation [6] Object features [7] Adaptive Parcel-Based Segmentation

    University of Salzburg | Department of Geoinformatics | (c) 2010-2022

    [5.1] Brief History

    Brief history

    • 1980s
      • First milestone: Haralick & Shapiro 1985 “Image segmentation techniques"
      • Major driver: industrial computer vision
    • 1990s
      • Discovery of image segmentation in remote sensing
      • Challenge: multi‐spectral, geo‐referenced data
    • 2000s+
      • Advent of VHSR data boosted development
      • Key role in semi‐automated analysis of EO data (e.g., Copernicus information layers)

    Nature performs segmentation as well. Since when, we don't know ...

    Source: unknown

    [5.2] What is segmentation?

    What is segmentation?

    • Segments
      • Portions of a linear object (such as a road)
      • Pieces of a wooden snake, links of a chain, etc.
    • Segmentation of linear geographical features
      • Road segments
      • 1‐dimensional discrete georeferencing (milestone, km‐post)
      • ‘Dynamic segmentation’ (according to address data)

    from: wds-internetwerbung.com

    Wooden snakes have segments and so have roads. Segments from exit to exit have irregular lengths, segments from milestone to milestone regular ones.

    What is segmentation ?

    • But let's move to 2-D (or 3-D) geographical space
    • Principles of 2D (areas) segmentation
      • Space is cut into pieces
      • Pieces should be of similar size
      • Entire space covered
      • No gaps or overlaps
    • Perfect 2D 'segmentation'
      • Hierarchical systems of administrative units (states, provinces, districts, etc.)
      • Number of units >>1
    • Top-down view

    Hierarchical segmentation of a territory into administrative units: 1 state (Austria), 9 provinces (Bundesländer), 127 districts (Bezirke), 2379 communities (Gemeinden, not shown)

    What is segmentation ?

    • Regionalisation (bottom-up)
    • Think of a...
      • Re‐organization of Europe towards the …
      • United Regions of Europe (URE)
      • Regions should be quite distinct from each other, yet internally homogenous
      • Politicians decide to take ‘natural language’ as the criterion to group neighbouring communities
      • Some pragmatic decisions to be taken
      • ... better stop here :)

      Fictitious example of the 'United Regions of Europe' (illustration using NUTS-1 level units)

    What is segmentation?

    • Another example for 'spontaneous' (bottom-up) regionalization
    • Students practicing a fire alarm
      • When arriving in the gym, clusters of students are built
      • Students cluster according to their class
      • Chance is very high to meet the majority of students at their class cluster, even if some of them move around
      • The class cluster will gather around one place, and should not occupy several spaces

    After a fire-alarm pupils reconvene in the gym...

    [5.3] Image segmentation in remote sensing

    Image segmentation in RS

    • Digital image data are arrays of pixels
    • Each pixel has an n-tuple of values
    • Here: Pixel P1 and P2 for two bands (A,B)
    • But where are the regions?

    By Lang,2004

    Image segmentation in RS

    • Objects against background (matrix) represented on image
    • Region (token): neighbouring pixels with similar value

    Image segmentation in RS

    • Image segmentation is a form of (dual) regionalisation
      • Regionalisation within the feature space and (at the same time)
      • Within real space
    • Level of detail of objects to be resolved depends on pixel size
      • 3 to 4 times smaller than objects of interest
      • Depending (again) on experience, contextual information, etc.

      2-dimensional geographical space

    Image segmentation in RS

    • How well does segmentation do?
      • Good matches, poor matches with real‐world objects
      • What about our own object hypotheses ?

    Region‐based segmentation of a multi‐spectral QuickBird image (0.6 m GSD)

    Image segmentation in RS

    Grassland patch

    Group of trees

    Mixed forest patch

    Visual delineation on 0.25 m FCIR orthophotos (top row) vs automated segmentation on 0.6 m QuickBird image

    From: Weinke & Lang, 2007

    Image segmentation in RS

    Segmentation results are challenged by the ultimate benchmark, our visual perception. Conceptual boundaries can hardly be detected by segmentation algorithms, while they are perceived by the human eye with ease (orchard problem).

    Quotation

    Lang 2008, p.15

    [5.4] Groups of segmentation

    Groups of segmentation

    • Pixel-based or histogram-based
      • Thresholding techniques
      • Segmentation of feature space
    • Region-based
      • Region growing
      • Split and merge
    • Edge-based
      • Laplace filter, Sobel-operator, representativeness,...
    • Non-image related
      • Non content expecting
      • Tiling image with a honeycomb or chessboard structure

    Finding homogenous objects

    Detecting edges between objects [and background (matrix)]

    Regions defined without information from image

    Source: unknown

    Histogram segmentation

    • Histogram Thresholding
      • Simplest way to accomplish an exhaustive segmentation
      • One- or multi-modal distribution of grey values; a threshold has to be determined
      • Only works in feature space → 'pseudo-segmentation' (a minimal sketch follows after the figure caption)

    'Segmentation' by thresholding band 1 of a Quickbird image. This does not result in regions per se.
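
    A minimal sketch of such a threshold-based 'pseudo-segmentation' (the band values and the threshold are illustrative assumptions):

```python
import numpy as np

# Histogram thresholding on one band: partitions the feature space,
# not geographical space, hence 'pseudo-segmentation'.
band = np.random.randint(0, 2048, size=(512, 512))   # hypothetical 11-bit band
threshold = 900                                        # e.g. read off the histogram valley
mask = band > threshold                                # boolean 'segment' membership
print(mask.mean())   # share of pixels above the threshold
```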

    Edge-based segmentation

    • A segmentation routine when elongated structures (e.g. roads) separate otherwise homogenous areas
    • Edge: clear boundary between homogenous areas, detectable by edge-sensitive algorithms (filters)
    • E.g. Lee-Sigma filter

    The first derivative helps detecting edges

    Edge-based segmentation

    • Edge detection
      • Filtering – smoothing to decrease noise in the image
      • Enhancement – revealing local changes in intensities
      • Detection – select edge pixels, e.g. by thresholding
      • Closing of gaps / deleting artefacts
      • Combining, extending of lines
    • Linking the edge pixels to form the region boundaries (the sketch below covers the first three steps)
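
    A minimal sketch of the first three steps (smoothing, enhancement, detection), assuming a single-band array and scipy's Gaussian and Sobel filters; gap closing and line linking are omitted:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

band = np.random.rand(256, 256)                 # hypothetical single band

smoothed = gaussian_filter(band, sigma=1)       # filtering: suppress noise
gx, gy = sobel(smoothed, axis=1), sobel(smoothed, axis=0)
magnitude = np.hypot(gx, gy)                    # enhancement: local intensity changes
edges = magnitude > magnitude.mean() + 2 * magnitude.std()   # detection by thresholding
```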

    Region-based segmentation

    • Region growing routine
      • Seed cells are distributed over image
      • Bottom up (randomly)
      • Top‐down (content expected)
    • Strategy
      • Neighbours (4‐ or 8‐neighbourhood) are included into region if
        • They do not belong to another region yet
        • A homogeneity criterion H applies
        • Two neighbouring regions are unified if H applies (a region-growing sketch follows below)

    Campbell, P.346
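
    A minimal sketch of region growing from a single seed cell; the homogeneity criterion H used here (difference to the running region mean below a threshold) is an illustrative assumption:

```python
import numpy as np
from collections import deque

def region_grow(band, seed, max_diff=10.0):
    """Grow a region from one seed pixel using a 4-neighbourhood.
    H (assumed): a neighbour joins if its value differs from the
    running region mean by less than max_diff."""
    rows, cols = band.shape
    region = np.zeros_like(band, dtype=bool)
    region[seed] = True
    total, count = float(band[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if 0 <= nr < rows and 0 <= nc < cols and not region[nr, nc]:
                if abs(band[nr, nc] - total / count) < max_diff:   # H applies
                    region[nr, nc] = True
                    total += float(band[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return region

# Hypothetical usage on a single band with one seed cell
band = np.random.randint(0, 256, size=(100, 100)).astype(float)
grown = region_grow(band, seed=(50, 50), max_diff=15.0)
```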

    Watershed segmentation

    • Watershed segmentation
      • Grey values as relief
      • Virtual flooding
      • Watersheds separate pools
      • Controlled by markers
    • Split& Merge
      • Top down following quadtree principle
      • Scene is subdivided into n (usually 4) equally sized parts / cells.
      • A cell is maintained as is if H applies, otherwise it is subdivided again.
      • Repeated until H applies
      • Merging of neighboring quadtree cells, if homogenous.

    [5.5] Multi-scale segmentation

    Multi-scale segmentation

    • Segmentation in hierarchical scale domains
    • Principles
      • Mimic the human eye
      • Reflect hierarchical structure of (geographical) reality
      • Segmentation layers are interlinked
      • Reproducibility and universality
    • In eCognition
      • region-based, local mutual best fitting approach (see below)
        • Colour homogeneity
        • Form homogeneity
      • Strictly hierarchical

    Lang et al., 2003

    Multi-scale segmentation

    • Homogeneity
      • Central concept in region‐based segmentation
      • Regions: maximum internal homogeneity , and
      • Maximum external heterogeneity ('difference')
    • Segmentation ≠ unsupervised classification
      • Involves a priori spatial aspects
      • Employs spatial optimization techniques (e.g. compactness)

    Multi-scale segmentation

    • Recent advances support multi‐resolution segmentation
    • Key characteristics
      • Several levels of object delineations
      • Derived from one single image
      • Multi‐resolution means multi‐scale
    • But which scales.... ?

    2-scale representation of scene with bush encroachment

    Multi-scale segmentation

    • What are relevant segmentation layers?
      • Regionalized hierarchies (Lang,2002)

    Different patterns of increasing object size with incremental multi‐resolution segmentation

    Multi-scale segmentation

    • What are relevant segmentation layers?
      • Statistical estimation of scale parameter
      • ESP Tool by Dragut et al.

    ESP tool taking into account local variance among objects and rate of change between levels

    Multi-scale segmentation

    • SCRM (Size constrained region merging) (G.Castilla)
      • The desired mean size of output polygons (DMS ‐ in hectares)
      • The minimum size required for polygons, or minimum mapping unit (MMU ‐ in hectares)
      • The maximum allowed size (MAS ‐ in hectares)
      • The minimum distance between vertices in the vector layer, or minimum vertex interval (MVI ‐in meters)
    • eCognition
      • Scale Parameter (average size of generated objects)
      • Colour homogeneity (spectral similarity)
      • Shape homogeneity (compactness of form)

    SCRM (yellow lines) vs. eCognition

    G. Hay, oral presentation, at ASPRS 2007

    Multi-scale segmentation

    • Bottom up region merging technique
      • Starting with each pixel being a region
      • A pair of regions is merged into one region, each merge having a merging cost (degree of fitting)
      • Objects are merged into bigger objects as long as the cost is below a 'least degree of fitting' (scale parameter), i.e. the merge fulfils the homogeneity criterion (a formula sketch follows below)
      • Starting points for merging distributed with maximum distance
      • Pair wise clustering process considering smallest growth of heterogeneity
    • Establishing segmentation levels on several scales using different scale parameters
    • (e.g. 2nd level based on 1st level: larger scale parameter results in larger objects consisting of the objects of the 1st level)
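
    One common way to write the merging cost (degree of fitting) for two candidate objects A and B, following the widely used multiresolution segmentation formulation (a sketch, not a verbatim formula from the course); n denotes pixel counts, σ_b the standard deviation in band b, and w_b the band weight:

```latex
\Delta h_{\mathrm{colour}} = \sum_{b} w_b \Big( n_{A \cup B}\,\sigma_{b,\,A \cup B}
    - \big( n_A\,\sigma_{b,A} + n_B\,\sigma_{b,B} \big) \Big)
```

    The merge is executed only while this cost stays below the threshold defined by the scale parameter.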

    Multi-scale segmentation

    • Decision heuristics
      • Finding an adjacent object B for an arbitrary object A for merging them
      • Fitting: when the homogeneity criterion is fulfilled
      • Best fitting: when the homogeneity criterion is fulfilled, and the merge between A and B produces the best degree of fitting compared to the merge between A and any other adjacent object

    Multi-scale segmentation

    • Decision heuristics (cont.)
      • Local mutually best fitting: find the best fitting object B for the object A, then find the best fitting object C for the object B
      • Confirm that object C is the object A, otherwise take B for A and C for B and repeat the procedure
      • Find best fitting pair of objects in the local vicinity of A following the gradient of homogeneity
      • Global mutually best fitting: merge the pair of objects for which the homogeneity criterion is best fulfilled in the whole image
      • Distributed treatment order
        • Use starting points with maximum distance to all other points treated before (treatment order defined over pixels or segments)

    Multi-scale Segmentation

    Definition of the degree of fitting

    • Colour and shape homogeneity are weighted against each other
    • Compactness and smoothness make up the shape homogeneity and are weighted against each other

    Two objects are similar when close to each other in feature space

    Compactness: ideally compact form of objects (objects don't become lengthy)
    Smoothness: object boundaries don't become fringed

    Multi-scale segmentation

    • Segmentation parameters
      • 'Scale parameter' (relative average size)
      • Compactness and smoothness

    Relation between boundary length l of the object and the perimeter of the bounding box of the object (bounding box: shortest possible boundary length)

    Relation between boundary length l of the object and the square root of the number n of the pixels of the object (square root of n equals the side of a square with n pixels)
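
    Written as formulas, with l the object's boundary length, n its pixel count and b the perimeter of its bounding box (assigning the two names follows the usual multiresolution segmentation convention and is an assumption here):

```latex
\mathrm{smoothness} = \frac{l}{b},
\qquad
\mathrm{compactness} = \frac{l}{\sqrt{n}}
```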

    Multi-scale segmentation

    • Domain-specific segmentation
      • Within 'image domains', i.e. broader classes such as forest or open land
      • Independent segmentation parameters control specific segmentation within domains

    From Tiede et al., 2008

    [5.6] Object Features

    Object features -overview

    • layer values
      • mean
      • std-dev
    • geometrical properties
      • size, shape, ...
    • textural properties
      • layer value texture (e.g. mean of sub objects: std-dev)
      • shape texture (e.g. directions of sub objects)
    • hierarchical properties
      • number of higher levels
      • number of super or sub objects
    • relations to classes of...
      • neighbour objects
      • sub objects (relative area of...)
      • super objects
    • membership to...
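
    A minimal sketch of how the basic layer-value and size features can be computed per object from a label image (the toy label image and band are assumptions for illustration):

```python
import numpy as np

band = np.random.rand(100, 100)                              # hypothetical image band
labels = np.repeat(np.arange(10), 1000).reshape(100, 100)    # toy segmentation (object ids)

features = {}
for obj_id in np.unique(labels):
    values = band[labels == obj_id]
    features[obj_id] = {
        "mean": values.mean(),        # layer value: mean
        "std_dev": values.std(),      # layer value: standard deviation
        "size_px": values.size,       # geometrical property: size in pixels
    }
print(features[0])
```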

    [5.7] Adaptive parcel-based segmentation

    Adaptive-parcel based segm.

    • Modified per-parcel approach
    • Based on digital cadastre data and multispectral image data (e.g. SPOT)
    • Depending on internal characteristics, objects are retained, aggregated or split apart
    • Finally: spectrally homogenous elementary units ( as basis for e.g. class modelling process)

    Adaptive-parcel based segm.

    • Three cases
      • (1) boundaries retained: parcel corresponds to one single homogenous image object, no change or update of the geometry required
      • (2) boundaries removed: parcels are merged because of internal homogeneity, change of geometry
      • (3) boundaries introduced: a single parcel is spectrally heterogeneous and split according to spectral behavior, change of geometry (mainly forest)

      From Lang & Tiede 2008

    Object-based Image Analysis (OBIA)

    Object-based Classification

    Stefan LANG & Dirk TIEDE

    [1] Strengths of object-based classification [2] Sample- vs. rule-based classification [3] Fuzzy vs. crisp classification [4] Class hierarchy [5] Class-related features [6] Class modelling

    University of Salzburg | Department of Geoinformatics | (c) 2010-2022

    [6.1] Strengths of object-based classification

    Object-based classification - why and how?

    • Additional information can be utilized
      • shape, texture, relationship to neighbours
      • 'object-relationship modelling' (Burnett & Blaschke 2003)
    • Classification
      • Based on samples (training objects)
      • Based on rules (advanced object feature database query)
    • 'Click and classify'
      • Direct labeling of well delineated objects
      • Easy re-classification

    Object-based classification - why and how?

    • Overcoming noise through segmentation
      • Especially useful for VHSR data
      • Improved signal/noise ratio
      • Adaptable image resolution
    • Decreasing number of units
      • Fewer units to be classified (compared to the number of pixels)
      • But increasing complexity in class description

    Object-based classification - why and how?

    • Recall the Yin-Yang analogy
      • Classification interlinked with segmentation
      • Intermediate pre-classification, before performing next step of segmentation (e.g. domain-specific segmentation)

    [6.2] Sample- vs. rule-based classification

    Sample- vs. rule-based

    • Sample-based classification
      • Define class membership by similarity to selected samples (training objects)
        • Sample has to be representative for its class
        • Use features clearly distinguishing the sampled class from other classes
        • Attention: only a few of the potential features may be distinctive
      • Classifier
        • E.g. nearest neighbour Classifier
        • The object will be assigned to the class whose samples are closest in feature-space distance (a minimal sketch follows below)
      • Useful approach, if knowledge about the scene's content is limited
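
    A minimal sketch of such a nearest-neighbour assignment in feature space (the classes, feature vectors and the unscaled distance are illustrative assumptions; in practice features would be normalised):

```python
import numpy as np

# Training objects per class: each row is a feature vector
# (e.g. mean NIR, mean red, object size) -- assumed values.
samples = {
    "forest":    np.array([[0.55, 0.08, 400], [0.60, 0.07, 350]]),
    "grassland": np.array([[0.45, 0.15, 900], [0.42, 0.18, 1100]]),
}

def classify(obj_features):
    """Assign the object to the class of the closest sample in feature space."""
    best_class, best_dist = None, np.inf
    for cls, feats in samples.items():
        dist = np.linalg.norm(feats - obj_features, axis=1).min()
        if dist < best_dist:
            best_class, best_dist = cls, dist
    return best_class

print(classify(np.array([0.50, 0.10, 500])))   # -> 'forest'
```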

    Sample- vs. rule-based

    • Rule-based classification
      • Heuristics are encoded in a set of rules
      • Define a class by single rule on one feature or by several rules on several features
      • Hierarchical relations of classes
      • Rules can address different kinds of features
        • Object features
        • Class-related features
      • Fuzzy or crisp rule definition
      • Advantages compared to sample-based classification
        • Incorporation of expert knowledge in the classification
        • Formulation of complex class descriptions
        • Transparency (especially compared to neural networks)
        • Transferability

    [6.3] Fuzzy vs. crisp rule-based classification

    Fuzzy vs. crisp rule-based classification

    • Sharp boundaries vs. gradual changes
      • Especially under natural conditions
      • Fuzzy rule sets account for this uncertainty

      Formann, 1995

      Clear-cuts in a forest are less ambiguous, both in terms of boundaries and class assignment

    Bog areas have different classes (e.g. open raised bog, heath bog, etc.) but no sharp boundaries. FCIR ortho 1976

    [6.4] Class hierarchy

    Class hierarchy

    • Classes are embedded in a hierarchical inheritance scheme
    • (1) Feature-based inheritance
      • child classes inherit all descriptive features from their parent classes
      • not confined to spectral values
    • (2) Semantic inheritance
      • classes can be grouped semantically
      • belong to the same logical parent class
    • Semantic inheritance may turn into feature-based when additional explicit information is provided

    Cognition network

    [6.5] Class-related features

    Class-related features

    • Classifying an object based on the classification of other objects
      • The classified label functions as a contextual feature, e.g. a green area is classified as an urban park because it is embedded in an urban area
    • Classified object acts as context feature for other objects
  • Iterative Process
    • Possibly non-deterministic or even unstable
    • Mutual and circular dependencies should be avoided

    [6.6] Class modelling

    Class modelling

    Aim

    Quantifying bush encroachment for evaluating the degradation status of a sensitive ecological area

    Monitoring requirement

    Degradation of near-natural active raised bog between 1976 and 1999 (Nature protection agencies)

    Conditioned Information

    Irrespective of the different data sources and the heterogeneity of classes, the decrease of the habitat type under concern was reported

    Class modelling

    • Mire complex with 'non-a-priori' units
      • The entire bog consists of a complicated pattern of vegetation patches with sub-units of ecological relevance
      • Sub-units form habitats that show a certain degradation stage
      • In manual interpretation: habitats are delineated and aggregated by experienced interpreter (generalization)
      • Difficult to automate, since we deal with 'double complexity'
      • Spatial structure of habitats can be categorized ( structural signatures), but not standardised

    Class modelling

    • Two aspects
      • Minimum percentage of bushes
      • Distribution of bushes

    A bush arrangement described by an island-like distribution of bushes

    Quotation

    Lang (2008): Object-based image analysis for remote sensing applications: modeling reality – dealing with complexity

    Class modelling

    • Bona fide objects
      • With 'natural' boundaries
      • Correspond to local physical discontinuities
      • Perceived by people in more or less the same way
    • Composite objects
      • Real-world modelled objects
      • Correspond to functional homogeneity
      • Perceived by experts in a similar way (convention)
    • (Conceptual) fiat objects
      • With 'un-natural' boundaries
      • Correspond to no genuine heterogeneity
      • Objects not obvious in the landscape

    Lang et al. 2010

    Class modelling

    • Demand profile and policy scope
      • Biotope complexes as basic units for regional planning purpose
      • BIMS (Biotope information and management system)

    Class modelling

    • Establishing rule sets for the description of the structural composition of biotope complexes
    • Consideration of the minimum size of biotope complexes (e.g. 4 ha, 2 ha)
    • Additional hint layers
    • Realization in Cognition Network Language (CNL)

    Class modelling

    • Thematic conditioning (functionally homogeneous units, minimum size)
      • 31,698 biotope complexes were delineated for the whole Stuttgart Region
      • Average size: 11.5 ha

    Lang et al. 2007, Tiede et al. 2007

    Object-based Image Analysis (OBIA)

    Accuracy Assessment & Object Validity

    Stefan LANG & Dirk TIEDE

    [1] Definitions [2] Non-site specific accuracy assessment [3] Site specific accuracy assessment [4] Error matrix [5] Object-based accuracy assessment [6] Object fate analysis [7] Object validity

    University of Salzburg | Department of Geoinformatics | (c) 2010-2022

    [7.1] Definitions

    Definitions

    • Accuracy
      • Degree of correctness of a classification result, agreement of image classification with reality
      • Quantitative quality measure judging the validity of results in scientific investigations and describing the (technical) usability of the generated thematic maps in practical applications
    • Error
      • Discrepancy between the thematic map and the situation on the ground (Foody, 2002)
    • Accuracy assessment
      • The process of assessing and quantifying the accuracy of digital image classification; Image analysis is not completed until the accuracy of the outcome is assessed (Lillesand & Kiefer 2000)
      • Comparison of pixels or polygons in a remote sensing‐derived classification (map to be evaluated) and independent evaluation data (reference map)

    [7.2] Non-site specific accuracy assessment

    Non-site specific accuracy assessment

    • Comparing area percentages of classes as occurring in the analysed image and in the reference map (see the sketch below)
    • "Non-site specific" means the actual spatial distribution of classes is not considered

    from Campbell, 2002

    Two maps with high agreement according to non-site specific assessment (A = agriculture, F = forest, W = water).
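    A minimal sketch of a non-site specific comparison, using illustrative area proportions: the class shares of the two maps agree closely, yet nothing is said about whether the classes coincide spatially.

```python
# Minimal sketch: only class area shares are compared, not their locations.
classified = {"A": 0.50, "F": 0.30, "W": 0.20}   # area proportions, classified map
reference  = {"A": 0.48, "F": 0.32, "W": 0.20}   # area proportions, reference map

difference = {c: round(abs(classified[c] - reference[c]), 2) for c in classified}
print(difference)   # {'A': 0.02, 'F': 0.02, 'W': 0.0} -- high "agreement"
```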

    [7.3] Site specific accuracy assessment

    Site specific accuracy assessment

    • Agreement of categories in classified image and reference map at specific locations
      • based on site‐by‐site comparison (pixels or objects)
      • assumption: each location is occupied by one (and only one) class
    • Calculation of error matrix as an aggregation of localised errors

    [7.4] Error Matrix

    Error matrix

    • Percentage correct (overall accuracy)
    • Error of omission ⇔ commission
      • Evaluating an error from two different viewpoints
      • Error of omission: correct class not recognised by the classification process (false exclusion)
      • Error of commission: assignment to the wrong class (false inclusion)
    • Producer's accuracy ⇔ Consumer's accuracy
      • Accuracies of individual categories
      • Producer's accuracy: error seen from the producer's viewpoint, i.e. the percentage of reference pixels of a class that were correctly classified
      • Consumer's / User's accuracy: error seen from the map user's viewpoint, i.e. the percentage of pixels classified into a class that actually belong to that class on the ground (what a user can expect to be correct)

    Error matrix

    • Kappa coefficient (Khat)
      • Khat = (observed agreement - expected agreement) / (1 - expected agreement)
    • Explanation of the formula (see the sketch below):
      • Observed agreement = overall accuracy
      • Expected agreement = chance agreement, i.e. the sum over all classes of the products of the row and column marginal proportions (class shares in the classified map and in the reference)
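    A minimal sketch of the measures discussed above, computed with numpy from an illustrative error matrix (rows = classified map, columns = reference); the counts are made up for demonstration.

```python
import numpy as np

# Illustrative error matrix: rows = classified map, columns = reference data
cm = np.array([[50,  5,  2],    # classified as A
               [ 3, 40,  4],    # classified as F
               [ 1,  2, 43]])   # classified as W
n = cm.sum()

overall_accuracy   = np.trace(cm) / n               # percentage correct
users_accuracy     = np.diag(cm) / cm.sum(axis=1)   # 1 - commission error (map user's view)
producers_accuracy = np.diag(cm) / cm.sum(axis=0)   # 1 - omission error (producer's view)

# Kappa: observed agreement corrected for chance agreement, with chance
# agreement derived from the row and column marginal proportions
p_observed = overall_accuracy
p_chance   = np.sum(cm.sum(axis=1) * cm.sum(axis=0)) / n**2
kappa      = (p_observed - p_chance) / (1 - p_chance)

print(round(overall_accuracy, 3))       # 0.887
print(np.round(users_accuracy, 3))      # [0.877 0.851 0.935]
print(np.round(producers_accuracy, 3))  # [0.926 0.851 0.878]
print(round(kappa, 3))                  # 0.83
```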

    Error matrix

    • Limitations
      • Error matrix is not a standardised measure
      • Many different variations exist
      • Sampling design and sample size can limit the meaning of the measures
    • Types of error
      • Disagreement of labels or just misalignment?
    • Reliability of reference data
      • 'Ground truth' data might be error-prone as well!
      • Other remote sensing data, when used as a surrogate for ground reference data, should have a higher spatial resolution.
      • When visual delineation is used as a reference, make sure this process is conducted independently and you are aware of its quality.
    • Spatial distribution of error not represented

    [7.5] Object-based accuracy assessment

    Towards object-based accuracy assessment

    • Semantic assessment of object‐based classification
      • Site specific approach (based on object centroids or whole objects)
      • Keeping objects used for training (if any) separate from an independent test set
      • Optional: small test areas where all occurring objects are checked.
    • Geometric assessment of image objects
      • Checking classified objects against a visual delineation
      • Quantitative assessment with spatial overlay techniques
    • Challenges of object‐based accuracy assessment
      • No full geometrical fit of reference objects and objects to be evaluated due to different ways of boundary generation.
      • When using fuzzy classification rules the categorical class assignments are not necessarily mutually exclusive
    • Accuracy is a matter of geometric and semantic agreement

    Towards object-based accuracy assessment

    • Comparing an OBIA classification with manual delineation

    [7.6] Object fate analysis (OFA)

    Spatial agreement

    • Quantitative assessment: spatial overlay with tolerance (see the sketch below)
      • Classified object has the same extent as the reference object (“stable object”)
      • Reference object is an aggregate of several classified objects
      • Complex geometry with non-overlapping objects
    • Characterisation of object relationship ("object fate")
      • Good objects
      • Expanding objects
      • Invading objects

    Lang et al 2011
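    A minimal sketch of a spatial overlay with tolerance, using shapely and purely illustrative geometries; the 10 % tolerance is an assumption, and the coarse grouping lumps expanding and invading objects together rather than separating them as full object fate analysis does.

```python
# Minimal sketch: categorising classified objects by overlap with a reference object.
from shapely.geometry import box

reference = box(0, 0, 10, 10)                 # reference object
candidates = {"a": box(0, 0, 10, 9.5),        # nearly identical extent
              "b": box(8, 8, 14, 14),         # partly inside, partly outside
              "c": box(20, 20, 25, 25)}       # no overlap at all

TOLERANCE = 0.10                              # allowed relative deviation

for name, geom in candidates.items():
    overlap = geom.intersection(reference).area / geom.area
    if overlap >= 1 - TOLERANCE:
        fate = "good (stable) object"
    elif overlap > 0:
        fate = "expanding / invading object"
    else:
        fate = "no spatial correspondence"
    print(name, round(overlap, 2), fate)
# a 1.0 good (stable) / b 0.11 expanding-invading / c 0.0 no correspondence
```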

    Object fate analysis

    Lang, 2008; Schöpfer et al. 2008; Albrecht 2008

    Object fate analysis

    Albrecht 2010

    Object loyalty (OL)

    • Similarity measures (see the sketch below):
      • Proportion of overlapping area
      • Number of overlapping objects
      • ➔ Locating systematic errors (e.g. a shift in geolocation)
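    A minimal sketch of the two similarity measures, again with shapely and illustrative geometries: the number of classified objects overlapping one reference object and the proportion of the reference area they cover.

```python
# Minimal sketch: overlap count and overlap proportion for one reference object.
from shapely.geometry import box

reference  = box(0, 0, 10, 10)
classified = [box(0, 0, 6, 10), box(6, 0, 11, 10), box(30, 30, 35, 35)]

overlapping   = [g for g in classified if g.intersection(reference).area > 0]
overlap_share = sum(g.intersection(reference).area for g in overlapping) / reference.area

print(len(overlapping), round(overlap_share, 2))
# 2 objects jointly cover 100 % of the reference object
```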

    [7.7] Object validity

    What does object validity mean?

    • "Degree of fitness for an operational task in a policy context." (Lang et al. 2010)
    • From crisp boundaries to more complex objects
      • As we leave the domain of crisp, well defined bona fide objects (Smith, 1995), the binary decision between correct and false labeling begins to vanish.
      • Fuzzification of a class assignment relates to the semantic ambiguity but not (necessarily) to the delineation.
      • The domain of fiat objects requires more flexible concepts.
      • We suggest the term validity to address the appropriateness of an object delineation.

    Object validity

    • Automated object generation - consistent, but not valid?
      • Spatial inappropriateness and scale mismatch
      • Different object representations
        • Terrestrial survey (red outline)
        • Visual interpretation (green outline)
        • Automated extraction (yellow outline)
      • Error band (buffer)

    From Albrecht 2008/2010

    Object validity

    ... how to validate?

    Object validity

    • Conditioned information
      • Aggregated class: Mixed arable land (not single fields)
      • Object outlines follow cadastral boundaries
      • Minimum size: ~ 4 hectares

    Object-based Image Analysis (OBIA)

    Outlook: OBIA for Non-Image Data

    Stefan LANG & Dirk TIEDE

    [1] Integrated geons [2] Spatial composite indicators

    University of Salzburg | Department of Geoinformatics | (c) 2010-2022

    [8.1] Integrated geons

    "A geon is a type of region, semiautomatically delineated with expert knowledge incorporated, scaled and of uniform response to a phenomenon under space-related policy concern." (Lang et al. 2014)

    Complex geospatial phenomena

    Lang 2010, mod.

    [8.2] Spatial composite indicators

    Meta (composite) indicators

    Kienberger 2012, Pernkopf & Lang 2011, Hagenlocher et al. 2012

    Meta (composite) indicators

    Kienberger et al.

    Operationalising vulnerability

    Kienberger 2012, Pernkopf & Lang 2011, Hagenlocher et al. 2012.

    Take-away message

    OBIA is an image analysis technique that builds upon the human way of thinking in objects.

    Stefan LANG & Dirk TIEDE

    University of Salzburg | Department of Geoinformatics | (c) 2010-2022

    Object-based Image Analysis (OBIA)

    An introductory course

    Stefan LANG & Dirk TIEDE

    [1] Why spatial image analysis? [2] Regions and image objects [3] Image segmentation [4] Knowledge representation [5] Class modelling

    University of Salzburg | Department of Geoinformatics | (c) 2010-2023