Using LiDAR to map tree shadows
tl;dr: I can load LiDAR data to simulate tree shadows for any time of year, but the hardware demands and hosting costs may be prohibitive, so I’m only sharing a small demo for now
Update: I can now offer LiDAR data for large parts of the planet in 1 square km blocks on shademap.app (Demo video)
Two years ago, I launched shademap.app, and since then, a common question I receive is: “Where are the trees?” It’s a fair question, considering I live in the Pacific Northwest, a region known for towering trees that significantly affect how much direct sunlight a location receives.
Here are two renderings of the shadows on Bainbridge Island for July 9th at 7:09 AM. The radar rendering clearly misses 90% of the shadows cast because it does not include vegetation. Radar only reflects off the ground (Correction: an HN user pointed out that radar does reflect off surfaces like vegetation. I assumed it did not because the SRTM radar dataset is cited as the source of ground-level elevation data. I’ll clarify once I understand more. We’re all learning here.), making objects such as trees and buildings invisible. LiDAR, on the other hand, reflects off all objects, creating a much richer model of the earth’s surface.
So why hasn’t ShadeMap included trees from the beginning? It’s because ShadeMap simulates shadows from elevation data, and the only readily available worldwide elevation datasets come from radar. Radar works at night and penetrates clouds, so satellites can compile this data 24 hours a day from space.
LiDAR, on the other hand, is much more accurate, but it is collected from airplanes or drones and cannot penetrate fog or clouds. It is far more time-consuming and expensive to collect, leaving each local government to fund its own surveying. However, I recently discovered that my home state of Washington provides an extensive LiDAR dataset covering large parts of the state.
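As an aside, the shadow simulation itself is conceptually simple once you have an elevation grid, whatever its source: a point is in shadow when the terrain between it and the sun rises above the sun’s ray. Here’s a minimal Python sketch of that test (my own illustration, not ShadeMap’s actual renderer; the cell size and sun angles are assumed inputs):

```python
import numpy as np

def is_shadowed(dem, row, col, sun_azimuth_deg, sun_altitude_deg, cell_size_m=10.0):
    """Return True if the cell at (row, col) lies in shadow.

    dem is a 2D array of elevations in meters; azimuth is degrees
    clockwise from north, altitude is degrees above the horizon.
    """
    az = np.radians(sun_azimuth_deg)
    # Unit step toward the sun in grid coordinates (north = -row, east = +col).
    d_row, d_col = -np.cos(az), np.sin(az)
    rise_per_meter = np.tan(np.radians(sun_altitude_deg))
    height = dem[row, col]
    r, c, dist = float(row), float(col), 0.0
    while True:
        r, c, dist = r + d_row, c + d_col, dist + cell_size_m
        i, j = int(round(r)), int(round(c))
        if not (0 <= i < dem.shape[0] and 0 <= j < dem.shape[1]):
            return False  # ray left the grid without being blocked
        # Blocked if terrain along the ray pokes above the sun ray from our cell.
        if dem[i, j] > height + dist * rise_per_meter:
            return True
```

Run for every pixel, with the sun position computed from date, time, and location, this yields a shadow mask for a single moment in time.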
I could finally fill in the gaps in my shadow simulation, at least for my own backyard. But there was one problem. The data format was geared toward traditional GIS software (it’s a GeoTIFF), not browsers (like a JPG or PNG). To use the data, I would have to take hundreds of gigabytes of floating-point GeoTIFF files, with elevations in imperial feet, and slice them into small, fast-loading image tiles where metric meters are encoded as red, green, and blue pixel values.
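Concretely, the encoding step looks something like this. It’s a minimal sketch assuming rasterio and the widely used Terrain-RGB scheme (0.1 m vertical resolution with a −10000 m offset); the filename is a placeholder, and the actual tiling and reprojection are glossed over:

```python
import numpy as np
import rasterio

FEET_TO_METERS = 0.3048

# Hypothetical input: a LiDAR surface model with elevations in feet.
with rasterio.open("lidar_dsm_feet.tif") as src:
    elevation_m = src.read(1).astype(np.float64) * FEET_TO_METERS

# Terrain-RGB packs height = -10000 + 0.1 * (R*256*256 + G*256 + B),
# so encode by scaling to 0.1 m units with a +10000 m offset.
value = np.round((elevation_m + 10000.0) * 10.0).astype(np.uint32)
rgb = np.stack([(value >> 16) & 0xFF,
                (value >> 8) & 0xFF,
                value & 0xFF]).astype(np.uint8)
# rgb is a 3-band (R, G, B) array ready to be sliced and written as PNG tiles.
```

Encoded this way, a browser can recover the elevation of any pixel from an ordinary PNG with a few multiplications.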
I bought a 1TB hard drive and started asking ChatGPT how to convert the data. (ChatGPT is a marvelous assistant and has saved me hours of reading documentation and irrelevant Google search results.) Once I started running the conversion process, I realized that my 16GB of RAM could not load these large data files, and I had to rewrite the conversion code to work with only a small region of the map at a time. For the first time in a long time, I’m feeling the need for a more powerful machine…
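For reference, the rewrite boiled down to streaming the raster window by window instead of loading whole files. GeoTIFFs expose their internal block layout, which rasterio can iterate over; `process_block` below is a hypothetical stand-in for the feet-to-meters/RGB step sketched above:

```python
import numpy as np
import rasterio

FEET_TO_METERS = 0.3048

def process_block(chunk_feet):
    # Placeholder for the real work: convert feet to meters here,
    # then RGB-encode and write out tiles as sketched above.
    return chunk_feet.astype(np.float64) * FEET_TO_METERS

with rasterio.open("lidar_dsm_feet.tif") as src:  # hypothetical input file
    # Iterate the GeoTIFF's internal blocks so only one small window
    # is resident in memory at a time, not the whole multi-GB raster.
    for _, window in src.block_windows(1):
        chunk = src.read(1, window=window)
        result = process_block(chunk)
```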
And it works. Or actually…it’s working right now. I’m converting just the Seattle metropolitan area, and it’s only about halfway done after 12 hours. The tiles already total over 15GB and are still growing. The simulations are incredible, but I’m not sure I want to sink money into hosting this data and making it publicly available. It’s a shame, but it’s the sound financial decision for now.
However, I can host small portions of this dataset for free, so if you’re curious what my long-term vision for ShadeMap is, try this demo.
As always, follow me on Twitter for frequent updates on this project or if you want to get in touch.