GeoAI Series #2: The Birth and Evolution of GeoAI

As many AI enthusiasts know, the seeds of AI were sown in the 1950s, largely disappeared from the 1970s to the 1990s, and resurfaced in the late 2000s. By the mid-2010s, the technology media had begun to fixate on AI, and more recently the concept of GeoAI has been added to this resurgent flow of contributions. This second post in a planned series of 21 GeoAI blog posts defines GeoAI and explains why it is gaining prominence by examining how it has evolved over time.

What is GeoAI?

An appropriate definition of GeoAI largely depends on the audience. Without getting too specific, GeoAI can best be thought of as a blend of narrow artificial intelligence (AI) and applied spatial science, often referred to as geographic information systems (GIS). Narrow AI is a type of AI focused on a very specific (or narrow) task. GIS, on the other hand, is an applied science within the field of geography that projects the physical world onto digital map layers for various applications. GIS itself is used in a wide range of fields including business, biology, urban planning, epidemiology, defense, and even dentistry; in short, any domain where problems have an inherent spatial component.

A more specific definition is that GeoAI fuses applied narrow AI methods, data mining, and rapid computations within a GIS platform to produce informative results from spatial data. GeoAI is a highly interdisciplinary field that links several applied scientific fields, including computer science, engineering, statistics, and spatial science, with geographic theory. Fundamentally, this is what defines GeoAI. Like the field of AI from which it was spawned (discussed in the next blog post), GeoAI is constantly evolving. So, let’s start by taking a brief look at its history.

A Brief History of GeoAI

While there is no official record of GeoAI’s beginning, we can estimate the first instances of GeoAI by identifying historical moments in GIS and applied statistics. The first spatial prediction method was developed by Danie Krige in 1951 and later formalized and applied by Matheron in 1963. This method is called ‘kriging’ and is one of the core techniques used in geostatistics. In the same year (1963), the concept of GIS was developed by the Anglo-Canadian Roger Tomlinson. One year later, Howard Fisher at Northwestern University developed the first operational GIS software. With the development of spatial prediction methods and the concept of GIS, we can arguably say that GeoAI has its roots as early as the mid-1960s, little more than a decade after Alan Turing proposed his famous test of machine intelligence (also discussed in the next blog post).
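To make the idea of kriging concrete, here is a minimal sketch of ordinary kriging using the open-source pykrige Python library rather than Krige's original 1951 formulation or any Esri tool; the sample coordinates, values, and variogram choice are made up purely for illustration.

```python
# Minimal ordinary kriging sketch using the open-source pykrige library.
# The sample coordinates and values below are invented for illustration.
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical observations: x, y coordinates and a measured value z
x = np.array([0.5, 1.2, 3.1, 4.7, 2.4])
y = np.array([0.8, 3.5, 1.9, 4.1, 2.2])
z = np.array([1.1, 0.7, 1.8, 1.4, 1.2])

# Fit a spherical variogram model and predict values over a regular grid
ok = OrdinaryKriging(x, y, z, variogram_model="spherical")
grid_x = np.arange(0.0, 5.0, 0.5)
grid_y = np.arange(0.0, 5.0, 0.5)
z_pred, z_var = ok.execute("grid", grid_x, grid_y)

print(z_pred.shape)  # interpolated surface, one value per grid cell
```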

With the formative roots of GeoAI established, the next question is “how has GeoAI evolved over time?” To answer this question, we can consider a timeline with four generations of GeoAI developments, defined by changes in seven key drivers:

  1. Hardware technologies,
  2. Access and availability of data,
  3. Level of connectivity,
  4. Machine Learning technologies,
  5. Software technologies,
  6. GIS roles, and
  7. Output.

Each of these drivers is now briefly elaborated within each generation.


Image developed by: Anastassios Dardas

1st Generation of GeoAI (mid-1960s to late-1990s): Limited Local Intelligence

Most of the elements that comprised the 1st generation (1G) of GeoAI were local to an individual computer and not well integrated; hence the title “limited local intelligence.” This generation was defined by limitations in computational power and restrictions on data availability and accessibility, which constrained the full use of machine learning algorithms. Additionally, AI ‘winters’, or periods of neglect, occurred during the 1970s and 1990s due to the overly inflated promises of earlier developers and abnormally high expectations from end users. Hence, this first generation of GeoAI stagnated for three decades.

Hardware Technologies: Relative to the present day, computers from the 1960s to the late 1990s were computationally slow and expensive. Adjusted for inflation, most personal computers cost between $3k and $7k USD. Most individuals could not afford to buy a computer, nor did they see an incentive to own one given its limited everyday uses. Computers in this timeframe were mainly for large corporations, government agencies, and well-funded academic projects.

Access, Availability, and Storage of Data: Similar to hardware technologies, data storage took up a lot of physical space, was expensive and limited in capacity, and had to be housed locally. Accessing data remotely was quite limited, as computer networks, including the Internet, were not well developed; indeed, the Internet was not publicly available until 1991. Remote access to data was usually conducted through peer-to-peer networks, which required direct communication between the host, the client machines, and the user. Spatial data were collected through standard field surveying techniques, satellite and aerial imagery, and blueprints, which had to be manually digitized and georeferenced for GIS applications.

Connectivity: Until 1991, the Internet was restricted to government agencies and a handful of institutions, so connectivity was limited outside of these settings. In 1995, only 0.4% of the world’s population was connected to the Internet (mostly in the USA), relying on slow dial-up technologies.

Machine Learning Technologies: Most of the existing machine learning (ML) techniques were created before 2000. Common ML algorithms used in GIS include Empirical Bayesian Kriging (EBK), Maximum Likelihood Classification (MLC), and Nearest Neighbour. These techniques saw limited use during the first generation of GeoAI due to insufficient computational power and inadequate GIS software. Moreover, it is debatable whether any actual machine ‘learning’ takes place in the use of these statistical techniques.
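To illustrate the flavour of these early classifiers, the sketch below uses scikit-learn’s k-nearest-neighbours classifier on a handful of synthetic “spectral band” values and land-cover labels; it is a modern stand-in rather than the software of the era, and every number in it is invented.

```python
# Nearest-neighbour classification sketch with scikit-learn.
# The two "spectral band" features and land-cover labels are synthetic,
# purely to illustrate the idea behind classifiers such as MLC and
# nearest neighbour applied to remotely sensed imagery.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Training pixels: [band_1, band_2] reflectance values and a class label
X_train = np.array([[0.10, 0.60], [0.12, 0.55], [0.45, 0.20],
                    [0.50, 0.25], [0.30, 0.30], [0.28, 0.35]])
y_train = np.array(["water", "water", "urban", "urban", "forest", "forest"])

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

# Classify two unlabeled pixels by majority vote of their 3 nearest neighbours
print(clf.predict(np.array([[0.11, 0.58], [0.48, 0.22]])))
# -> ['water' 'urban']
```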

Software Technologies: ML analyses were mainly done programmatically, and GIS software didn’t become mainstream until the early 1990s. Esri’s first-generation software, ArcInfo, was one of the first commercial GIS products; released initially in 1982, it ran on a Prime minicomputer with connected graphics terminals. ArcView was Esri’s first windowed desktop GIS, with version 1.0 released in 1991, leading to much more widespread use of GIS on Microsoft Windows (as well as Unix and Mac OS 9) through to its final 3.x releases in 2002.

GIS Roles: The main GIS role in the first generation of GeoAI was the GIS analyst, who processed geospatial data and, if applicable, applied geostatistical techniques.

Output: Output from early GeoAI analyses were standard cartographic maps.

Image developed by: Anastassios Dardas

2nd Generation of GeoAI (2000 to late-2000s): Early Enterprise & Web Era

Compared to 1G, 2G GeoAI is characterized by the widespread adoption of the Internet and personal computers (PCs), improved GIS software, increased availability and accessibility of data, and the use of enterprise geodatabases. 2G GeoAI is thus defined by the entrance into the Web era and the early implementation of enterprise solutions.

Hardware Technologies: Unlike computers from the 1960s to the late 1990s, PCs in the 2000s became faster, less expensive, and easier to use. The widespread adoption of desktop computers, and to some extent laptops, was itself largely a function of advancements in chip and storage technology and improved software.

Access, Availability, and Storage of Data: Data storage started to become more compact (hard disk drives and flash drives), less expensive, and gradually began migrating towards the cloud. Even more important, database management system technologies, including the UX and UI of enterprise geodatabases, continued to flourish, improving storage, accessibility, and collaboration within and between organizations.

Connectivity: The Internet started to become mainstream during this era with 16% of the world’s population connected by 2005. The development of websites and application programming interfaces (APIs) started to surge. Additionally, 3G mobile technologies enabled cell phone devices to connect to the internet.

Machine Learning Technologies: Support Vector Clustering and unsupervised ML methods became some of the most popular approaches for clustering and classification in ArcGIS. Additionally, Geographically Weighted Regression (GWR) was devised around this time and later implemented in the core ArcGIS software.
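To show what GWR actually does under the hood, here is a bare-bones NumPy illustration of its core idea: fitting a separate weighted least-squares regression at each location, with nearby observations weighted more heavily by a Gaussian kernel. This is a teaching sketch on simulated data, not the ArcGIS or mgwr implementation, and the bandwidth is an arbitrary choice.

```python
# Bare-bones illustration of Geographically Weighted Regression (GWR):
# fit a separate weighted least-squares model at each location, weighting
# nearby observations more heavily with a Gaussian kernel.
import numpy as np

rng = np.random.default_rng(0)
n = 100
coords = rng.uniform(0, 10, size=(n, 2))          # point locations
x = rng.normal(size=n)                            # one explanatory variable
beta_true = 0.5 + 0.1 * coords[:, 0]              # coefficient varies over space
y = beta_true * x + rng.normal(scale=0.1, size=n)

X = np.column_stack([np.ones(n), x])              # design matrix with intercept
bandwidth = 2.0                                   # fixed Gaussian bandwidth (arbitrary)

local_betas = np.empty((n, 2))
for i in range(n):
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)       # spatial weights around location i
    W = np.diag(w)
    # Weighted least squares: beta_i = (X'WX)^-1 X'Wy
    local_betas[i] = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print(local_betas[:3])  # intercept and slope estimated at the first 3 locations
```

The per-location slope estimates should drift upward with the x-coordinate, mirroring the spatially varying coefficient built into the simulated data.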

Software Technologies: Due to improvements in software and hardware technologies, some of the ML methods were implemented as GUI tools in ArcGIS Desktop. Esri’s third desktop software, ArcMap, had its first release in 1999, eventually replacing ArcView by the early 2000s.

GIS Roles: The GIS analyst remained the standard position for processing geospatial data and performing geostatistics. Additional responsibilities for this role during the 2G period included accessing and publishing content through enterprise geodatabases.

Output: Cartographic maps remained the standard output in GIS.

Image developed by: Anastassios Dardas

3rd Generation of GeoAI (2010 – 2019): The “Big” Leap

3G GeoAI emerged from the amplified use of the Internet and personal network devices (e.g., smartphones and tablets), significant breakthroughs in GIS, ML, storage, and computational technologies, and the widespread availability and accessibility of data. Parallel to AI, 3G GeoAI has seen exponential growth in use by government agencies, corporations, NGOs, and academics to solve real-world spatial problems. Esri Inc. joined other tech companies (e.g., Google, Amazon, Microsoft, Apple) in investing in AI-related content. These major milestones, accomplished within a short period of time, shifted the paradigm of GeoAI into a new generation (3G), which we can refer to as the “big” leap in GIS, with “big” here referring to the rise of big data.

Hardware Technologies: The hardware of today would have been largely unimaginable as recently as the 1990s. Smartphones and other personal devices are powerful enough to run some narrow AI workloads (e.g., facial recognition, augmented reality) at up to 5 trillion operations per second. PCs have become significantly smaller, cheaper, and exponentially more powerful than their 2000s predecessors thanks to hyperthreading technologies, advancements in GPUs, and the invention of solid state drives (SSDs). In CPUs and GPUs in particular, these advancements have been driven by semiconductor companies competing to keep pace with Moore’s Law over the past 50 years. Lastly, cloud computing offered by tech companies (e.g., Google, Microsoft, and Amazon) has become a popular avenue for data analytics.

Access, Availability, and Storage of Data: Data storage has become larger (fitting terabytes), substantially cheaper, and has continued its migration to the cloud (e.g., Azure Blobs, AWS S3). Database management systems have become more robust in security, reliability, and speed due to the exponential demand for, and daily collection of, “big” data. “Big” data, as used in this context, is a marketing term that refers to the real-time collection of vast amounts of data that greatly exceed the capacity of standard PCs. To put things into perspective, the amount of data created daily in 2018 was about 2.5 exabytes (roughly 29,000 GB every second)! By contrast, it was 100 GB per day in 1992 and 100 GB per second in 2002. This growth is the result of the explosion of APIs and smartphone apps, which have become core business models for many corporations. Crowdsourcing has also become a popular approach for creating data. All of these factors have substantially increased the accessibility and availability of data for research and development purposes.

Connectivity: 45% of the world’s population was connected to the Internet by 2015, due to the development and relatively low pricing of smartphones and PCs and the increased coverage and bandwidth of Internet providers. Bandwidth increased rapidly to meet commercial and residential demand at broadband speeds (> 25 Mbps). Fiber optic connections (100+ Mbps, up to 1 gigabit per second (Gbps)) became a high-speed option for residential homes. Additionally, 4G and LTE technologies now allow smartphones to stream HD video, play games, and host video conferences.

Machine Learning Technologies: In 2009, Professor Fei-Fei Li launched ImageNet, a free database containing more than 14 million labeled images, which is used in computer vision competitions every year. By 2011, GPUs were powerful enough to train convolutional neural networks, sparking the rise of deep learning. Many AI experts believe that deep learning will become the backbone technology of AI, especially since it can automatically analyze unstructured “big” data (time series, text, images, and audio) faster and more accurately than traditional ML models. Tech companies have flooded the job market with data science, data engineering, and machine learning engineering positions, and companies including Esri Inc. collectively invest billions of dollars in research and development to bring ML and DL technologies into everyday use.
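As a rough illustration of the kind of model that GPUs made trainable, the snippet below defines a toy convolutional neural network in PyTorch (a framework chosen here purely for illustration; the post does not prescribe one) and pushes a batch of random tensors through it in place of real image chips.

```python
# Toy convolutional neural network in PyTorch, the kind of model that GPU
# advances made practical to train on labeled imagery such as ImageNet.
# Random tensors stand in for real images; this is illustrative only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
images = torch.randn(4, 3, 64, 64)            # a fake batch of 4 RGB image chips
logits = model(images)
print(logits.shape)                           # torch.Size([4, 10])
```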

Software Technologies: ML analyses are undertaken programmatically and continue to be implemented in GIS software. The majority of GIS software development has shifted towards Web and enterprise-based solutions: organizations use ArcGIS Online and ArcGIS Enterprise to manage and analyze “big” geospatial data and display interactive Web maps as the output. Both solutions have a suite of ArcGIS apps for specific use cases (e.g., Operations Dashboard, StoryMaps, Tracker, Survey123, ArcGIS Insights). ArcMap continues to be the predominant GIS desktop, but ArcGIS Pro is becoming more popular, especially since it offers out-of-the-box deep learning tools, integrated Notebooks for spatial data science, and the ability to process “big” data through a direct connection to GeoAnalytics Server.

GIS Roles: On top of processing geospatial data and performing geostatistics, the GIS analyst is now responsible for managing and publishing content on ArcGIS Enterprise/Online and developing Web maps. GIS analysts also collaborate with data scientists, a new role in 3G GeoAI responsible for the more advanced GeoAI applications (e.g., deep learning).

Output: Standard cartographic maps continue to display GIS output. However, Web GIS, particularly Web maps and story maps, has become a more popular channel for communicating outputs.

Hence, the 3G landscape has evolved considerably from the 1G GeoAI landscape shown at the start of this post. The evolution, however, is not yet complete; the present world of GeoAI is encapsulated in the figure below and discussed next.

Image developed by: Anastassios Dardas

4th Generation of GeoAI (2020 - ?): The Frontier of Intelligence

While 3G GeoAI greatly changed the GIS ecosystem, new technologies and applications on the horizon promise even more dramatic advancements in GeoAI. For instance, Internet of Things (IoT) devices and drones are the next frontiers for “big” data collection. Real-time data will supplement deep learning and data science approaches and open new use cases and analytics, such as smart cities and semantics. With these components, we are transitioning from an era of collecting and processing “big” data to one of automatic, informed decision-making driven by “intelligent” systems with little to no human intervention: what could be defined as a more “intelligent” ML platform. Given the emerging nature of these technologies, we can only speculate on how they will shape the GIS ecosystem. What is certain is that we are transitioning into 4G GeoAI.

Hardware Technologies: PCs are still popular, and CPUs, GPUs, core memory, and storage all continue to improve. However, recent trends suggest that virtualization, specifically virtual machines, is becoming the new “PC” for modern computing. This is due to the flexibility of customization (e.g., operating systems, computational specs) that can be tailored to the user’s needs, the scalability to host “big” data and enterprise solutions with little maintenance required, and the ability to access content remotely (the value of which has been demonstrated by the current pandemic).

Although we cannot be sure when it will be commercially available, the most promising advancement for computing in the next decade or so is quantum computing. Quantum computing promises exponential improvements in processing speed, potentially making even the fastest classical computers obsolete. The applications of quantum computing in GeoAI are currently unknown; however, the potential will exist to handle data volumes and build more precise ML and DL models that would be too computationally intensive for current computers. This will enhance existing GeoAI applications and open up new ones, such as the development of smart cities, the optimization of logistics and supply-chain management, and improved meteorological forecasting.

Access, Availability, and Storage of Data: Data storage continues to expand, become cheaper, and transfer and download content faster. Azure, Amazon, Google, and Oracle are some of the largest “big” data cloud storage providers. Drones and IoT devices are becoming the new “personal devices”, able to capture and directly transfer real-time data to an analytics platform. With these devices supplementing data collection, it is estimated that by 2025, 30% of the data collected will be real-time, and that 463 exabytes of data will be created daily, compared to 2.5 exabytes as recently as 2018! The combination of “big” real-time data and quantum computers could make it possible to monitor, analyze, and change everyday activities in near real-time. For instance, cities will want to adopt “smarter” initiatives to decongest traffic by rerouting vehicles in real time.

Connectivity: The continued growth in smartphone and personal device adoption and improved Internet coverage have brought 60% of the world’s population online. 4G and LTE technologies have been implemented globally. 5G is on the horizon in developed nations, promising blazingly fast connections (10 to 100x faster than 4G) that will allow IoT devices and self-driving cars to communicate with one another.

Machine Learning Technologies: It is unclear what the next ML technology will be, but it is clear that there will be widespread use of generative adversarial networks (GANs), reinforcement learning, and transfer learning. These types of learning algorithms, and how they are applied in GIS, will be discussed later in this blog series (particularly transfer learning).

Other emerging ML technologies are natural language processing (NLP) and semantic AI. Natural language processing enables computers to interpret and analyze human language; it is used to mine text automatically, classify documents, and perform sentiment analysis. Applying NLP to GIS requires supplementing existing geospatial data with other information sources, such as restaurant reviews or street safety assessments, which are then analyzed through semantic AI. Semantic AI aims to identify the meaning of data, rather than relying on statistics alone, through knowledge graphs or ontologies. Ontologies enable computers to understand and process human-like queries efficiently (e.g., what is the quickest route at this time?).

Applied ontologies in a GIS have the potential to solve the “big” data silo problem: a situation where “big” data are hosted in data repository systems but are not linked (i.e., modeled), creating a lack of synergy, duplication of effort, missed data enrichment opportunities, and potentially inefficient geospatial processes. Integrating ontologies in a GIS platform to link “big” data stores creates the potential to shorten complex geospatial processes, minimize the use of ML, and solve difficult problems that require real-time updates (e.g., finding the optimal route for emergency vehicles while avoiding construction and traffic). This can be used in any GIS application, particularly those that rely on real-time data, such as smart cities and logistics solutions, and it defines an intelligent system with minimal use of ML.
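As a toy illustration of the ontology idea, the sketch below builds a tiny “knowledge graph” with networkx in which invented routes, streets, facilities, and construction zones are linked by typed relationships, and a simple traversal answers a semantic-style question (which routes serving a hospital avoid active construction?). Real semantic AI would use formal ontologies and far richer data; every entity here is made up.

```python
# Toy knowledge-graph sketch with networkx, illustrating the ontology idea:
# geospatial entities linked by typed relationships that a "semantic" query
# can traverse. All entities and relations are invented for illustration.
import networkx as nx

g = nx.DiGraph()
g.add_edge("Route_12", "Main_St", relation="passes_through")
g.add_edge("Route_7", "Oak_Ave", relation="passes_through")
g.add_edge("Main_St", "Construction_Zone_3", relation="has_active")
g.add_edge("Hospital_A", "Route_12", relation="served_by")
g.add_edge("Hospital_A", "Route_7", relation="served_by")

def has_construction(street):
    """True if the street is linked to an active construction zone."""
    return any(d["relation"] == "has_active"
               for _, _, d in g.out_edges(street, data=True))

def clear_routes(facility):
    """Routes serving a facility that avoid streets with active construction."""
    routes = [r for _, r, d in g.out_edges(facility, data=True)
              if d["relation"] == "served_by"]
    clear = []
    for route in routes:
        streets = [s for _, s, d in g.out_edges(route, data=True)
                   if d["relation"] == "passes_through"]
        if not any(has_construction(s) for s in streets):
            clear.append(route)
    return clear

print(clear_routes("Hospital_A"))  # -> ['Route_7']
```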

Software Technologies: ML analyses are done programmatically and will continue to be implemented in GIS software. The ArcGIS API for Python, specifically the learn module, has robust DL and NLP functions built in to process GIS content locally and on the Web. ArcGIS Pro is now the premium GIS desktop software, enabling users to process and publish their content to ArcGIS Online and Enterprise, which in turn dominate the Web GeoAI ecosystem for geospatial analytics, including “big” data. ArcGIS Notebooks has recently been integrated into ArcGIS Online and Enterprise to enhance the user’s experience in spatial data science, particularly in a Web GIS environment. More specifically, ArcGIS Notebooks comes with ArcPy, the ArcGIS API for Python, and a suite of 300 third-party Python libraries, allowing users to: 1) analyze “big” data, 2) develop DL and precise spatial data science models, 3) generate dynamic visualization tools, 4) share workflows securely, 5) collaborate transparently across teams, and 6) schedule and automate Web GIS administrative processes. Lastly, Analytics for IoT is the newest ArcGIS Online product; it captures real-time data from IoT devices, allowing GIS analysts to apply many GeoAI procedures to explore and gain spatial intelligence from constant streams of big data.
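The sketch below outlines the kind of workflow the ArcGIS API for Python supports: connecting to a Web GIS, searching hosted content, and training a deep learning model with the learn module. The portal connection, search text, data path, and training settings are placeholders, and class names and signatures in arcgis.learn vary between releases, so treat this as an illustrative outline rather than a definitive recipe.

```python
# Rough sketch of a Web GIS + deep learning workflow with the ArcGIS API
# for Python. The search text, data path, and training settings below are
# placeholders; check the arcgis.learn documentation for your release.
from arcgis.gis import GIS
from arcgis.learn import prepare_data, UnetClassifier

# Connect to a Web GIS (anonymous connection to ArcGIS Online shown here)
gis = GIS()

# Search hosted content (placeholder query)
items = gis.content.search("title:land cover", item_type="Feature Layer")
print(items[:3])

# Train a deep learning model on exported image chips (placeholder path)
data = prepare_data(r"/data/landcover_chips", batch_size=16)
model = UnetClassifier(data)
model.fit(epochs=5)
model.save("landcover_unet_demo")   # hypothetical output name
```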

GIS Roles: The GIS analyst retains the same responsibilities as in 3G GeoAI. However, it is likely that there will be increased demand for graduates with the skills of a “GeoAI scientist”: a strong GIS background combined with the duties of a data scientist, including the creation of DL and semantic models within the spatial dimension.

Output: Clearly, maps will continue to display GIS output, as mapped content is central to all GIS. However, dynamic and interactive Web maps displaying real-time information will become the most effective approach to presenting insights. Furthermore, we can imagine a GeoAI platform automatically generating Web maps that effectively convey key insights to the GIS analyst. If this becomes a reality, then we will have achieved true augmented human intelligence through GeoAI.

Summary

In summary, the central argument of this blog post is that simultaneous advancements in seven key drivers have defined the evolution of GeoAI from its roots in the 1950s and 1960s, through three generations, towards a fourth, future generation of technology. The fundamentals of GeoAI remain the same, but its methods and applications have changed the GIS ecosystem dramatically, especially in the last decade. We are entering a new GeoAI era (4G), which brings with it exciting opportunities for those of us in the GIS field. The next blog post will further discuss artificial intelligence, machine learning, and deep learning and their implications for the education of GIS users and professionals.

This post was translated to French and can be viewed here.