Bayes Filter
I’m enjoying just having one class to worry about right now. Gives me more time to focus and digest.
I just finished a project in which I had to implement a Bayes filter for a virtual robot. I won’t delve into the math here, as exciting as that would be, but the gist is that it’s a way to estimate unknowns using knowns and probability.
The robot has a map of the world but doesn’t know where it is in that world. By following movement commands and detecting walls with its sensors, it can start to get an idea of its location. The fun part is that sometimes it’ll randomly ignore a movement command and go in a different direction, and its sensor readings are only correct most of the time.
It turns out that as long as you know the probability that a movement command will go as intended and the probability that the sensors are correct, you can make surprisingly accurate educated guesses about your location as you move around, take more readings, and build on previous guesses. It was satisfying to watch my robot’s view of the world take shape as I steered it toward its goal as best I could, with different parts of the map lighting up brighter as it gradually became more certain of its location.
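To make that loop concrete, here’s a minimal sketch of the predict-and-correct cycle in Java. It’s a toy 1-D version, not the project’s actual code: the map, the probabilities, and all the names (BayesFilterDemo, predict, correct, and so on) are made up for illustration, and the robot just loops around a ring of cells with a single wall sensor.

```java
import java.util.Arrays;

/**
 * Toy 1-D histogram Bayes filter: the robot keeps a belief over which cell
 * of a ring it occupies, updating after each (noisy) move and sensor reading.
 * Illustrative sketch only; not the course project's actual setup.
 */
public class BayesFilterDemo {
    // true = this cell is next to a wall the sensor can detect
    static final boolean[] MAP = {true, false, false, true, false, false, false, true};

    static final double P_MOVE_OK  = 0.8;  // chance a "move one cell" command actually moves the robot
    static final double P_SENSE_OK = 0.9;  // chance the wall sensor reports the truth

    // Prediction step: shift belief forward one cell with probability P_MOVE_OK;
    // otherwise the command was ignored and the robot stayed put.
    static double[] predict(double[] belief) {
        int n = belief.length;
        double[] next = new double[n];
        for (int i = 0; i < n; i++) {
            next[i] = P_MOVE_OK * belief[(i - 1 + n) % n]   // moved here from the previous cell
                    + (1 - P_MOVE_OK) * belief[i];          // stayed in place
        }
        return next;
    }

    // Correction step: weight each cell by how well it explains the sensor
    // reading, then normalize so the belief sums to 1 again.
    static double[] correct(double[] belief, boolean sawWall) {
        int n = belief.length;
        double[] next = new double[n];
        double total = 0.0;
        for (int i = 0; i < n; i++) {
            double likelihood = (MAP[i] == sawWall) ? P_SENSE_OK : 1 - P_SENSE_OK;
            next[i] = likelihood * belief[i];
            total += next[i];
        }
        for (int i = 0; i < n; i++) next[i] /= total;
        return next;
    }

    public static void main(String[] args) {
        double[] belief = new double[MAP.length];
        Arrays.fill(belief, 1.0 / MAP.length);  // start with no idea where the robot is

        // Pretend we issued three move commands and saw: wall, no wall, no wall.
        boolean[] readings = {true, false, false};
        for (boolean sawWall : readings) {
            belief = predict(belief);
            belief = correct(belief, sawWall);
            System.out.println(Arrays.toString(belief));
        }
    }
}
```

The 2-D version from the project would follow the same shape, just with a bigger state space and four movement directions: predict with the motion model, correct with the sensor model, normalize, repeat.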
Side note
This was my first time working with Java in a couple of years, and I was definitely rusty with its ways of doing things. It gradually started coming back, though. Looking back, I’m not sure why I liked Java so much when I first started using it. I suspect it has something to do with the fact that I was coming straight from C++.
Anyway, this AI course is more fun than I expected. It deals more with the classic interpretation of AI from before the LLM blowup, but to me it’s just as interesting and relevant. AI doesn’t stop being AI just because of some shiny new tech, after all.