Teaching and Learning GIS in the Age of AI (Part 1)

AI tools are transforming what is possible in geospatial work. But powerful tools do not automatically produce good spatial thinking. For a new generation of GIS students and educators, the gap between what these tools can produce and what users truly understand is widening fast. This series explores what it means to teach and learn GIS in an age when the tools keep getting better at simulating the judgment they cannot replace.

In this first part of a three-part series on teaching and learning GIS in the age of AI, I want to reflect on my own journey into the geospatial domain and offer some thoughts on the changing landscape for both students and educators in light of recent technological advances.

I came to GIS through an unusual door. My PhD research centred on the Ebla archives, a collection of cuneiform tablets from Bronze Age Syria that record the political and economic life of an ancient Near Eastern kingdom. These were not maps: they were administrative records that included the names of hundreds of different places, from small villages to large rival kingdoms.

Cuneiform tablets like this one often contain valuable information about the administration, economics, politics, and geography of ancient Near Eastern kingdoms like Ebla.

My work used spatial analysis and graph theory to trace how place names clustered across thousands of those records and to look for patterns in political geography that the scribes never made explicit. Which settlements appeared together? How often, in what contexts, under what kinds of leadership? The analysis pointed toward something genuinely interesting: the way place names are organized in the tablets seems to reflect the political status of those settlements in relation to Ebla and to each other, and possibly to the broader trade and political geography of Bronze Age northwest Syria. It was not a proven conclusion, but a pattern the spatial analysis made visible that no amount of close reading alone would have surfaced. It was, at its core, a geospatial problem. It just happened to involve clay tablets rather than shapefiles.
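The core of that analysis can be sketched in a few lines. The place names and tablet groupings below are purely illustrative, not the actual Ebla data, but the technique is the same: count how often pairs of places are mentioned together, then look at which pairings dominate.

```python
from collections import Counter
from itertools import combinations

# Illustrative only: each "tablet" is the set of place names it mentions.
tablets = [
    ["Ebla", "Mari", "Emar"],
    ["Ebla", "Emar"],
    ["Mari", "Tuttul"],
    ["Ebla", "Mari"],
]

# Count how often each pair of places appears on the same tablet.
pair_counts = Counter()
for places in tablets:
    for a, b in combinations(sorted(set(places)), 2):
        pair_counts[(a, b)] += 1

# Frequent pairings are candidates for political or economic ties --
# candidates only, until a human judges whether the pattern is real
# or an artifact of how the records were kept.
for pair, n in pair_counts.most_common():
    print(pair, n)
```

The counting is trivial; the judgment calls, such as which mentions count as meaningful co-occurrence, are where the real work lives.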

The work was slow and painstaking. Building the relational data by hand, making judgment calls about how to structure geographic connections, deciding which patterns were meaningful and which were artifacts of how the data had been recorded. These were not problems anyone could have solved for me. But the data construction, the pattern detection, the coding? AI tools could have accelerated all of that significantly, and I genuinely wish they had existed when I was a graduate student.

What I have come to appreciate, though, is that I could only have used those tools well because I had already done the hard thinking. I understood the spatial logic of the problem. I knew what a suspicious result looked like. I had developed, through slow and sometimes frustrating work, the kind of judgment that tells you when something is geographically wrong, even when it looks right on screen. Had I had AI tools then, they would have served the analysis, not driven it.

I think about that constantly now, watching a new generation of GIS students arrive with more powerful tools than I ever had, and less time than I had to develop the foundational skills and knowledge that underlie and inform how those tools are used. The capability gap between what they can produce and what they fully understand is wider than it has ever been, and it is widening fast.

The More Powerful the Tool, the More It Demands of You

This is not a new tension. I grew up at a time when the internet was rapidly infiltrating every aspect of life, and I was one of the last in my school to get a home computer. When I finally did, it changed everything, from how I did research to how I communicated and learned. But the internet did not do the research for me. Access to digital libraries, online datasets, and academic journals transformed what was possible, but it could not decide what questions were worth asking, or differentiate a good argument from a weak one. The tool changed the landscape; the judgment still had to come from somewhere else. Good research remained the purview of the researcher.

AI in geospatial work is a generational shift of the same order, and we are not going back. Many of my students today have never participated in higher education without AI tools available to them, whether generative tools like large language models or the more traditional algorithmic tools that have long been embedded in GIS platforms. That is simply the world they are entering. The question is not whether to use these tools, but how to use them in ways that strengthen rather than substitute for spatial reasoning.

Geospatial problems make this especially important, because spatial data has properties that generic AI literacy training almost never addresses. Where and how something is measured matters. How a study area is divided matters. What scale the analysis runs at matters. A pattern that looks meaningful at one level of geographic aggregation can dissolve or reverse entirely at another. A model trained on data from one region may perform poorly and fail silently when applied somewhere else. These are not edge cases. They are fundamental properties of spatial data that shape what any analysis can and cannot find.

Consider a student tasked with identifying optimal locations for new transit stops in Halifax, a city where the gap in service between the north end and the peninsula’s more affluent southern neighbourhoods is a documented and ongoing equity concern. Give a generative AI tool a prompt and it will produce a suitability workflow: criteria, weightings, and possibly code. The output will look complete. But has the student asked why those criteria were selected? Whether the weightings reflect Halifax’s specific demographic and geographic context? Whether the underlying datasets capture the communities most underserved by the existing network? Or did they simply accept the AI tool’s output? In this scenario, the AI is filling the methodological space that domain knowledge should occupy.
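To make the stakes of those weightings visible, here is a minimal weighted-overlay sketch. The sites, criteria, scores, and both weighting schemes are hypothetical; the point is only that the same data crowns a different "optimal" site depending on weights the tool cannot justify on the student's behalf.

```python
# Hypothetical candidate sites, each scored 0-1 on three criteria.
sites = {
    "North End A": {"population_density": 0.9, "low_income_share": 0.8, "land_cost": 0.3},
    "South End B": {"population_density": 0.6, "low_income_share": 0.2, "land_cost": 0.9},
}

def suitability(scores, weights):
    """Weighted linear combination: the simplest suitability model."""
    return sum(scores[k] * w for k, w in weights.items())

# A cost-heavy weighting vs. an equity-focused one (both invented here).
generic = {"population_density": 0.3, "low_income_share": 0.2, "land_cost": 0.5}
equity = {"population_density": 0.3, "low_income_share": 0.6, "land_cost": 0.1}

for name, w in [("generic", generic), ("equity", equity)]:
    ranked = sorted(sites, key=lambda s: suitability(sites[s], w), reverse=True)
    print(name, "->", ranked[0])
```

Under the cost-heavy weights the southern site wins; under the equity weights the north-end site does. Neither ranking is wrong as arithmetic. Choosing between them is exactly the judgment the prompt cannot outsource.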

This is the core problem. AI tools can produce spatially sophisticated-looking outputs without any awareness of whether the geographic logic holds. And because those outputs are clean, well-formatted, and confident, the errors are not always visible, especially to someone who has not yet developed the instinct to look for them. Knowing how to use the tools is not the same as knowing what the tools cannot see.

It Depends Which Side of the Desk You’re On

The challenge of building spatial judgment in the age of AI looks different depending on where you sit. For students and learners, there is a risk of reaching for powerful tools before developing the foundation needed to evaluate what they produce. Students are the ones developing domain knowledge and learning which questions matter and which problems are worth solving. That knowledge is not incidental to using AI well in a GIS context; it is the whole point. Without it, there is no reliable way to know whether a result is actually right, or just plausible.

For educators, the challenge runs in a different direction. The question is not only how to teach students to use AI tools responsibly, but also how to design learning experiences that build genuine spatial understanding in an environment where AI can shortcut almost every step of the process. When a student can generate a working spatial model in minutes, how does an educator assess whether they understand it? When code writes itself, how do you teach the logic underneath it?

Both challenges are real, and both deserve more than generic AI literacy advice. This series addresses them in turn. Part 2 focuses on learners, specifically the cluster of skills that matter most for GIS and geospatial students navigating an AI-saturated field, and how to build them deliberately rather than accidentally. Part 3 turns to educators, and to the harder question of what it means to teach spatial thinking when the tools keep getting better at simulating it.

The goal, in both cases, is the same: not to resist AI, but to use it well. The internet did not make good researchers redundant; it made the foundations of good research more important, not less. AI in geospatial work is no different. The tools are extraordinary. What you bring to them still determines what they are worth.

A special thank you to Mohamed Ahmed for co-authoring this blog series. 

About the Author

Steven Edwards is a Visiting Scholar (2025–2026) with Esri Canada’s Education and Research Division and Lead Faculty of Geospatial Data Analytics at the Centre of Geographic Sciences (COGS) at Nova Scotia Community College. He works at the intersection of GIS, machine learning and GeoAI, developing practical workflows, training materials and applied solutions that help bridge academic research and real-world geospatial problem solving. He holds a PhD in Archaeology from the University of Toronto, where he focused on computational approaches to ancient landscapes using spatial modelling and data science techniques. His current work emphasizes scalable GeoAI methods, including predictive modelling, network analysis and visibility analysis within the ArcGIS ecosystem. When he’s not working on geospatial problems, you’ll likely find him exploring new ideas at the edge of GIS and AI, or planning his next long-distance trek.