Gateway to GeoAI Blog Series

In the past 260 years, we have experienced three industrial revolutions. The invention of the steam engine paved the way for the 1st revolution. Electricity powered mass production during the 2nd. Electronics and information technology automated production, marking the 3rd. Now, with the rise and mainstreaming of the Internet of Things (IoT) and narrow artificial intelligence, including GeoAI, we are entering the 4th. It is probable that we, as GIScientists, will struggle to understand and properly use GeoAI. The question is: how can we prepare ourselves to use GeoAI as it changes over time? This post is the 1st of a 21-part GeoAI series that will help us get started.

Rise of the GeoAI Industry

Geospatial Artificial Intelligence (GeoAI) combines applied geographic information systems (GIS) and the tools of artificial intelligence (AI) into one complex discipline that has existed in all but name for approximately two decades. However, the term GeoAI, which has emerged to describe the union of these information technologies, is relatively new. Several combined drivers have stimulated the recent emergence of GeoAI as a field that is currently attracting a great deal of attention. These drivers include:

  1. the enhancement of computing & graphics hardware (e.g., robust CPUs & GPUs, and recently TPUs);
  2. increased distribution of, and access to, cluster cloud computing (e.g., AWS, Azure, NVIDIA);
  3. advances in Deep Learning (DL) methods and applications since the early 2010s;
  4. GIS software enhancement and flexibility (i.e. ArcMap to ArcGIS Pro + ArcGIS Enterprise/ArcGIS Online);
  5. increased documentation of ML and DL frameworks (e.g., TensorFlow, Pyro);
  6. the exponential increase in data collection and accessibility (≥ 80% of all data contains geotagged information); and
  7. cheaper and larger database storage.

With these drivers, the GeoAI industry is projected to grow significantly in the near future (~16% compound annual growth rate, reaching a value of roughly $550 billion USD by 2025). Thus, demand for the tools and techniques of GeoAI will continue to increase across all spatial information sectors, including education and research.

The Risk of Applying GeoAI

Like any other technology, the success of the emerging field of GeoAI is driven largely by the knowledge of its users. In other words, a GIS user who applies the tools of GeoAI without even a basic understanding of Machine Learning (ML) and Deep Learning (DL) concepts will potentially make errors in implementation and/or in the interpretation of results. Hence, this unqualified use of complex concepts and methods makes GeoAI seem, to many potential users, like a risky black-box approach to data analysis.

Take, for example, the case of a specialist using GeoAI to identify the safest route for a humanitarian supply convoy, i.e. the route with the least likelihood of encountering improvised explosive devices (IEDs). The supply convoy drivers and vehicle support staff are effectively putting their lives at risk based on the specialist’s analysis and decision-making process. The outcome of this life-or-death scenario depends substantially on data quality and quantity, combined with the specialist’s application knowledge. Certainly, this is an extreme case: it is a difficult problem to solve, and the use of AI in the decision calculus doesn’t guarantee 100% accuracy. By the same token, however, that does not render AI an inappropriate approach. In fact, current AI systems are much more efficient at optimizing specific tasks than humans faced with the same decision process. The risk lies in whether the AI-based analysis can generalize the process and interpret results on its own, without human intervention; it cannot, which is why current systems remain what is termed narrow AI and still rely heavily on the human element in the overall process. In later parts of this blog series, we’ll examine in detail the fundamental concepts of optimization vs. generalization, and the types of AI we can envision in applications such as the one noted above.

Many people, especially members of the GIS community, will be drawn conceptually to GeoAI as it quickly becomes better understood in technical circles, and some will simply seek out and apply available algorithms without the knowledge necessary for their correct use. Of those who follow this path, relatively few may actually understand what they are doing. Further, within university contexts, GeoAI does not necessarily lend itself well, conceptually and technically, to widespread teaching in GIS courses. These factors may easily discredit or devalue GeoAI’s potential applications for problem-solving and may eventually lead to a third AI “winter.” This term is analogous to the idea of a nuclear winter: funding and interest in a technology hibernate for a period of time due to unrealistic expectations and missed goals. If this hibernation is long-lived, interest in the concept wanes and it eventually falls into disuse. We have already experienced two AI winters – one in the mid-1970s and the other in the late 1980s to early 1990s. In order to avoid a third period of hibernation, it is essential for users to understand the strengths, weaknesses, and areas for improvement of applied AI in the GIScience domain. This is one of the main objectives of the blog series described below.

Unquestionably, AI’s current popularity as a concept is increasingly driving client demands, shifting companies’ business models, and adding impetus to the evolution of the GIScience industry. To support this evolution, GeoAI has the potential to become a new teaching and learning standard in an advanced GIS curriculum, and we need to be well prepared for this possibility.

Entering The Expanse of GeoAI

Analogous to the sci-fi series The Expanse, GeoAI pushes beyond prevailing GIS standards into the realm of the near-future possible. In this context, GeoAI has the potential to alter current geography and geomatics programs by evolving teaching practices to include a GeoAI component in modern GIS curricula. Certainly, this will be no easy feat to accomplish! Academics will be required to extend their knowledge base so that they are well versed in the concepts and applications of GeoAI, while also meeting the minimum computational costs of undertaking ML and DL applications. Students will be required to have at least a strong grasp of high-school math, especially linear algebra (e.g., the dot product, sketched below), basic calculus, and statistics.
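
To make that dot-product requirement concrete, here is a minimal NumPy sketch of why it matters for DL: a single artificial neuron is little more than a dot product of inputs and weights, plus a bias. All values below are made up purely for illustration.

```python
import numpy as np

# A single neuron computes a dot product of its inputs and weights, plus a bias.
# The numbers here are made up purely for illustration.
inputs = np.array([0.5, -1.2, 3.0])   # three input features
weights = np.array([0.8, 0.1, -0.4])  # learned parameters
bias = 0.2

# dot product: 0.5*0.8 + (-1.2)*0.1 + 3.0*(-0.4) = -0.92; adding the bias gives -0.72
output = np.dot(inputs, weights) + bias
print(output)  # ≈ -0.72
```

Stacks of exactly this operation, followed by activation functions (covered in Blog post #5), are what a deep network computes, which is why linear algebra sits at the top of the prerequisite list.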

Given these and the above considerations, this blog series is intended to give higher-education professors and students a benchmark for GeoAI instruction within an undergraduate GIS curriculum. The series covers topics that include the range of AI concepts needed to understand GeoAI’s scope, the required mathematics, and their application in advanced GIScience. While there are a very large number of free resources on the Web for learning the concepts, many of these are scattered and often involve steep learning curves, making it difficult to fit the pieces together without informed guidance. Some resources demonstrate implementing GeoAI using GIS applications (e.g., ArcGIS Pro and the ArcGIS API for Python), but too often there is not enough explanation of the parameters and workflow processes that must be used for a successful application; the short sketch below illustrates the kind of compact demo this series will unpack step by step.
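
As a taste of what the later demos (e.g., Blog post #15 on the Single Shot Detector) will unpack, here is a minimal, illustrative sketch using the arcgis.learn module of the ArcGIS API for Python. The data path and parameter values are hypothetical placeholders, and every argument shown (batch size, epochs, learning rate) is exactly the kind of setting this series will explain rather than gloss over.

```python
# A minimal sketch, not a production workflow; the path and parameter values are hypothetical.
from arcgis.learn import prepare_data, SingleShotDetector

# Load image chips previously exported from ArcGIS Pro as training data
data = prepare_data(r"C:\GeoAI\training_chips", batch_size=16)

# Create a Single Shot Detector object-detection model from the prepared data
model = SingleShotDetector(data)

# Train briefly; choosing epochs and learning rate is discussed later in the series
model.fit(epochs=10, lr=0.001)
```

Code this short is precisely why GeoAI can feel like a black box: without knowing what each parameter controls, a user cannot judge whether the resulting model is trustworthy.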

This blog series is not intended to supersede existing resources on the Web, nor does it in any way comprise a complete GeoAI curriculum. Instead, the vision is to compile and present easily understood information for all potential users with a keen interest in understanding and implementing GeoAI workflows. Hence, it presents a set of guidelines and blueprints intended to encourage engagement. More specifically, the blog series is for those who currently use ArcGIS or plan to use ArcGIS software for ML and DL purposes.

Below is the initial table of contents of the blog series (subject to change as time passes):

Part I: GeoAI 101

  1. Blog post #1: Gateway to GeoAI Blog Series (current)
  2. Blog post #2: About GeoAI
    1. What is GeoAI?
    2. The history of GeoAI
    3. Why the hype?
    4. Description of the long-term vision, without the hype
  3. Blog post #3: AI, ML, and DL
    1. What is AI, and what are its types? (Narrow, General, and Super)
    2. The general differences between AI, ML, and DL
    3. When to use ML and DL appropriately
    4. Should GIS users learn/understand ML and DL?
    5. Why are we going to focus on DL instead of ML?
    6. The connection between GeoAI and DL

Part II: Understanding Deep Learning

  1. Blog post #4: Introduction to Deep Learning
    1. Introduction to Machine Learning (“Shallow Learning”)
      1. Types of Machine Learning algorithms
    2. Fundamental Differences between Machine Learning and Deep Learning
    3. Brief History of Deep Learning
    4. Types of Deep Learning Methods
  2. Blog post #5: How does Deep Learning work conceptually?
    1. Forward Propagation
    2. Types of Activation Functions
    3. Loss & Cost functions
    4. Gradient Descent, Optimizations, and Learning Rate
    5. Backpropagation
    6. Update Weights & Iterate Until Convergence
  3. Blog post #6: How does Deep Learning work mathematically?
    1. What are tensors?
    2. Types of tensors
    3. Forward propagation & Activation Functions
    4. Loss & Cost Functions
    5. Backpropagation
    6. Updating Weights
  4. Blog post #7: Potential Pitfalls of Deep Learning
    1. Training, Evaluation, and Testing
    2. Underfitting vs. Overfitting
    3. Optimization vs. Generalization
    4. Information Leaks
    5. Combating the Pitfalls
      1. Splitting Data Methods [Evaluation Protocol]
      2. Vectorization & Normalization
      3. Regularization (L1, L2, Dropouts)
      4. Data Augmentation
  5. Blog post #8: Translating the mathematics of DL into Python (NumPy + Keras)
    1. Why NumPy + Keras?
    2. Code sample
  6. Blog post #9: Insights to Conducting DL Properly
  7. Blog post #10: DL Methods in the GIS Ecosystem
    1. Image classification
    2. Object detection
    3. Semantic segmentation
    4. Instance segmentation

Part III: Deep Learning I in ArcGIS

  1. Blog post #11: Modern workflow process of DL in ArcGIS
    1. ArcGIS Pro + ArcGIS API for Python
    2. ArcGIS Pro + DL Python Packages
      1. Keras & TensorFlow
      2. Fast.ai & PyTorch
      3. Theano
      4. CNTK
  2. Blog post #12: Computational Requirements & Installation Guidelines [Demo]
    1. RAM, SSD, CPU, and GPU Requirements
    2. Installation – CPU vs. GPU Approach
      1. NVIDIA
      2. Conda environment & ArcGIS Pro
  3. Blog post #13: Focusing on Object Detection
    1. What are Convolutional Neural Networks (CNN)?
    2. Types of CNN Architectures
    3. Description of ResNets, Inception, VGG
    4. Evolution of Object Detection
  4. Blog post #14: Transfer Learning
    1. What are pre-trained networks?
    2. How to select one?
    3. Freezing & Fine-tuning
  5. Blog post #15: Single Shot Detector using ArcGIS Pro + ArcGIS API for Python [Demo]
  6. Blog post #16: Keras + ArcGIS Pro [Demo]
  7. Blog post #17: Future of Deep Learning II in ArcGIS
    1. Other DL methods in GIS
    2. TensorFlow & CNTK

Part IV: Real-Time Deep Learning in ArcGIS

  1. Blog post #18: ArcGIS Enterprise [GeoEvent, GeoAnalytics, Ops. Dashboard] + DL
  2. Blog post #19: IoT + ArcGIS

Part V: Future of Deep Learning in GIS

  1. Blog post #20: What’s the next ‘Expanse’ of DL in the GIS realm?

Part VI: Resource Appendix

  1. Blog post #21: Recommended Resources to Review & Expand
    1. Esri Inc. [Sessions, Documentation, Links]
    2. Conferences
    3. Academic Papers [GeoAI]
    4. DL books
    5. 3rd Party channels