In the past 260 years, we have experienced three industrial revolutions. The invention of the steam engine paved the way for the first; electricity powered mass production during the second; electronics and information technology drove automated production in the third. Now, with the rise and mainstreaming of the Internet of Things (IoT) and narrow artificial intelligence, including GeoAI, we are entering the fourth. It is probable that we, as GIScientists, will struggle to understand and properly use GeoAI. The question is: how can we prepare to use GeoAI as it changes over time? This post is the first in a 21-part GeoAI series intended to help us get started.
Rise of the GeoAI Industry
Geospatial Artificial Intelligence (GeoAI) combines applied geographic information systems (GIS) and the tools of artificial intelligence (AI) into one complex discipline, a union that has existed in all but name for approximately two decades. The term GeoAI itself, however, is relatively new. Several converging drivers have stimulated the recent emergence of GeoAI as a field that is attracting a great deal of attention. These drivers include:
- the enhancement of computing & graphics hardware (e.g., robust CPUs & GPUs, and more recently TPUs);
- increased distribution and access to cluster cloud computing (e.g., AWS, Azure, NVIDIA);
- advances in Deep Learning (DL) methods and applications since the early 2010s;
- GIS software enhancement and flexibility (e.g., the evolution from ArcMap to ArcGIS Pro + ArcGIS Enterprise/ArcGIS Online);
- increased documentation on ML and DL algorithms (e.g., TensorFlow, Pyro);
- the exponential increase in data collection and accessibility (an estimated 80% or more of all data contains geotagged information); and
- cheaper and larger database storage.
With these drivers, the GeoAI industry is projected to grow significantly in the near future (a compound annual growth rate of roughly 16%, reaching an estimated USD $550 billion in value by 2025). Thus, demand for the tools and techniques of GeoAI will continue to increase across all spatial information sectors, including education and research.
The Risk of Applying GeoAI
Like any other technology, the success of the emerging field of GeoAI depends largely on the knowledge of its users. In other words, a GIS user who applies the tools of GeoAI without even a basic understanding of the underlying Machine Learning (ML) and Deep Learning (DL) concepts will potentially make errors in implementation and/or in the interpretation of results. This unqualified use of complex concepts and methods makes GeoAI seem, to many potential users, like a risky black-box approach to data analysis.
Take, for example, the case of a specialist using GeoAI to identify the safest route, with the least likelihood of encountering improvised explosive devices (IEDs), for a humanitarian supply convoy. The supply convoy drivers and vehicle support staff are effectively putting their lives at risk based on the specialist's analysis and decision-making. The outcome of this life-or-death scenario depends substantially on data quality and quantity, combined with the specialist's application knowledge. Certainly, this is an extreme case: the problem is difficult, and the use of AI in the decision calculus does not guarantee 100% accuracy. By the same token, that does not make AI an inappropriate approach. In fact, current AI systems are far more efficient than humans at optimizing specific, well-defined tasks. The risk lies in whether an AI-based system can generalize the process and interpret results on its own, without human intervention. It cannot: current systems remain what is termed narrow AI, relying heavily on the human element in the overall process. In later parts of this blog series, we will examine in detail the fundamental concepts of optimization vs. generalization and the types of AI relevant to applications such as the one noted above.
Many people, especially members of the GIS community, will be drawn to GeoAI as it becomes better understood in technical circles, and some will simply seek out and apply available algorithms without the knowledge necessary for their correct use. Of those who follow this path, relatively few may actually understand what they are doing. Further, within university contexts, GeoAI does not necessarily lend itself well, conceptually or technically, to widespread teaching in GIS courses. These factors could easily discredit and devalue GeoAI's potential for problem-solving and eventually lead to a third AI "winter." The term is analogous to a nuclear winter: funding and interest in a technology hibernate for a period of time because of unrealistic expectations and goals not met in a timely manner. If the hibernation is long-lived, interest in the concept wanes and it eventually falls into disuse. We have already experienced two AI winters, one in the mid-1970s and the other in the late 1980s to early 1990s. To avoid a third period of hibernation, it is essential for users to understand the strengths, weaknesses, and areas for improvement of applied AI in the GIScience domain. This is one of the main objectives of the blog series described below.
Unquestionably, AI’s current popularity is increasingly driving client demands, shifting companies’ business models, and adding impetus to the evolution of the GIScience industry. To support this evolution, GeoAI has the potential to become a new teaching and learning standard in an advanced GIS curriculum, and we need to be well prepared for this possibility.
Entering The Expanse of GeoAI
Analogous to the sci-fi series The Expanse, GeoAI extends prevailing GIS standards and pushes into the realm of the near-future possible. In this context, GeoAI has the potential to alter current geography and geomatics programs by evolving teaching practices to include a GeoAI component in modern GIS curricula. Certainly, this will be no easy feat to accomplish! Academics will need to extend their knowledge base so that they are well versed in the concepts and applications of GeoAI, while also meeting the minimum computational requirements for undertaking DL and ML work. Students will need at least a strong grasp of high school math, especially linear algebra (e.g., the dot product), basic calculus, and statistics.
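As a taste of that linear algebra prerequisite, the dot product is the core operation behind forward propagation in a neural network: each neuron computes a weighted sum of its inputs plus a bias, then applies an activation function. Here is a minimal NumPy sketch; the input and weight values are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical feature values and learned weights for a single neuron.
inputs = np.array([0.5, -1.2, 3.0])
weights = np.array([0.8, 0.1, -0.4])
bias = 0.2

# The dot product gives the weighted sum of inputs; add the bias term.
weighted_sum = np.dot(inputs, weights) + bias  # 0.4 - 0.12 - 1.2 + 0.2 ≈ -0.72

# ReLU activation: negative sums are clipped to zero.
activation = max(0.0, weighted_sum)

print(weighted_sum)  # approximately -0.72
print(activation)    # 0.0
```

Deep learning frameworks perform exactly this computation, just on millions of weights at once, which is why comfort with vector and matrix operations matters.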
Given these considerations, this blog series is intended to give higher-education professors and students a benchmark for GeoAI instruction within an undergraduate GIS curriculum. The series covers the range of AI concepts needed to understand the field's scope, the required mathematics, and their application in advanced GIScience. While there are a great many free resources on the Web for learning these concepts, they are scattered, often involve steep learning curves, and are therefore difficult to fit together without informed guidance. Some resources demonstrate implementing GeoAI with GIS applications (e.g., ArcGIS Pro and the ArcGIS API for Python), but too often there is not enough explanation of the parameters and workflow processes needed for a successful application.
This blog series is not intended to supplant existing resources on the Web, nor does it in any way constitute a complete GeoAI curriculum. Instead, the vision is to compile and present easily understood information for anyone with a keen interest in understanding and implementing GeoAI workflows. Hence, it presents a set of guidelines and blueprints intended to generate encouragement and engagement. More specifically, the series is for those who currently use ArcGIS or plan to use ArcGIS software for ML and DL purposes.
Below is the initial table of contents of the blog series (subject to change as time passes):
Part I: GeoAI 101
- Blog post #1: Gateway to GeoAI Blog Series (current)
- Blog post #2: About GeoAI
- What is GeoAI?
- The history of GeoAI
- Why the hype?
- Description of the long-term vision, without the hype
- Blog post #3: AI, ML, and DL
- What is AI and its types? (Narrow, General, and Super)
- The General difference between AI, ML, and DL
- When to use ML and DL appropriately
- Should GIS users learn/understand ML and DL?
- Why are we going to focus on DL instead of ML?
- The Connection of GeoAI and DL
Part II: Understanding Deep Learning
- Blog post #4: Introduction to Deep Learning
- Introduction to Machine Learning (“Shallow Learning”)
- Types of Machine Learning algorithms
- Fundamental Differences between Machine Learning and Deep Learning
- Brief History of Deep Learning
- Types of Deep Learning Methods
- Blog post #5: How Deep Learning Works Conceptually
- Forward Propagation
- Types of Activation Functions
- Loss & Cost functions
- Gradient Descent, Optimizations, and Learning Rate
- Update Weights & Iterate Until Convergence
- Blog post #6: How Deep Learning Works Mathematically
- What are tensors?
- Types of tensors
- Forward propagation & Activation Functions
- Loss & Cost Functions
- Updating Weights
- Blog post #7: Potential Pitfalls of Deep Learning
- Training, Evaluation, and Testing
- Underfitting vs. Overfitting
- Optimization vs. Generalization
- Information Leaks
- Combating the Pitfalls
- Splitting Data Methods [Evaluation Protocol]
- Vectorization & Normalization
- Regularization (L1, L2, Dropouts)
- Data Augmentation
- Blog post #8: Translating mathematics of DL into Python (Numpy + Keras)
- Why NumPy + Keras?
- Code sample
- Blog post #9: Insights to Conducting DL Properly
- Blog post #10: DL Methods in the GIS Ecosystem
- Image classification
- Object detection
- Semantic segmentation
- Instance segmentation
Part III: Deep Learning I in ArcGIS
- Blog post #11: Modern workflow process of DL in ArcGIS
- ArcGIS Pro + ArcGIS API for Python
- ArcGIS Pro + DL Python Packages
- Keras & TensorFlow
- Fast.ai & PyTorch
- Blog post #12: Computational Requirements & Installation Guidelines [Demo]
- RAM, SSD, CPU, and GPU Req.
- Installation – CPU vs. GPU Approach
- Conda environment & ArcGIS Pro
- Blog post #13: Focusing on Object Detection
- What are Convolutional Neural Networks (CNN)?
- Types of CNN Architectures
- Description of ResNets, Inception, VGG
- Evolution of Object Detection
- Blog post #14: Transfer Learning
- What are pre-trained networks?
- How to select one?
- Freezing & Fine-tuning
- Blog post #15: Single Shot Detector using ArcGIS Pro + ArcGIS API for Python [Demo]
- Blog post #16: Keras + ArcGIS Pro [Demo]
- Blog post #17: Future of Deep Learning II in ArcGIS
- Other DL methods in GIS
- TensorFlow & CNTK
Part IV: Real-Time Deep Learning in ArcGIS
- Blog post #18: ArcGIS Enterprise [GeoEvent, GeoAnalytics, Ops. Dashboard] + DL
- Blog post #19: IoT + ArcGIS
Part V: Future of Deep Learning in GIS
- Blog post #20: What’s the next ‘Expanse’ of DL in the GIS realm?
Part VI: Resource Appendix
- Blog post #21: Recommended Resources to Review & Expand
- Esri Inc. [Sessions, Documentation, Links]
- Academic Papers [GeoAI]
- DL books
- 3rd Party channels
About the Author: Anastassios Dardas