Well, I gave my presentation. It went okay, but I'm kicking myself because I forgot to show a few cool features I added specifically for the presentation. I added a show/hide feature that makes the empty spaces look like walls until you visit them. That way, during manual movement you have to react to the map as you discover it, much like the search algorithms do. Oh well, I think it went okay.
Advice for the next person: actually, just stop. I say this because I was trying to add some cool features right before the presentation. I would have been much better off putting the programming down and preparing for my presentation. Instead I broke my code and spent all my presentation prep time trying to fix what I had just broken. I wanted to display the optimal path after it was found. In doing so I ended up messing up which stacks I was modifying in my recursive statements, and then my searches failed. I figured it would be easy to revert the code to what I had the previous night. The problem was that I hadn't fully uploaded my backup to Google Drive, so I had nothing to revert to. So here is my second piece of advice: every time you get a new feature working, back it up and label it!
Anyway, I'm going to upload the build that I used to present before I make any changes for my defense. Right now my iterative deepening depth first search on a target is not working. The problem is fairly simple. In the regular DFS I return to a previous state by restarting, saving the key destination spots that work, and then using the DFS to return the whole program to the previous working state. Running the DFS this way doesn't give me the same path I took. I need to fix this by pushing every robot move onto a stack, restarting my program, then executing everything on the stack to return to the previous working state.
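The fix I have in mind could be sketched roughly like this: record every robot move as it happens, then replay the whole sequence against a fresh map to reproduce the exact state. `MoveRecorder`, `Direction`, and `applyMove` are my own placeholder names, not the project's actual classes.

```csharp
// Rough sketch of recording every robot move and replaying it to
// restore a previous state. MoveRecorder, Direction, and applyMove
// are placeholder names, not the project's actual API.
using System;
using System.Collections.Generic;

enum Direction { North, South, East, West }

class MoveRecorder
{
    private readonly List<Direction> moves = new List<Direction>();

    // Call this every time the robot moves, no matter which search made the move.
    public void Record(Direction d) => moves.Add(d);

    // After restarting the program, replay every recorded move in order
    // to land back in the exact previous state.
    public void Replay(Action<Direction> applyMove)
    {
        foreach (var d in moves)
            applyMove(d);
    }
}
```

One detail: I used a list rather than a literal stack, because the moves have to be replayed in the order they were made; popping a stack would replay them backwards.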
There are also some optimizations I could make to my program. Setting up custom EventArgs and using those to paint would be much more efficient. Right now execution time is limited by the time to draw to the screen and the time to update the log. If I moved those to their own thread, the searches could execute much faster. But it would take significant work to actually optimize my code, because just putting the draw on its own thread wouldn't correctly track the robot. The reason is that I do a lookup to see where the robot currently is as I am drawing, so if the search got out of sync with my draw calls the visual would be inaccurate.
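One way the custom EventArgs idea could look, as a sketch: the search raises an event that carries the robot's new position, so the paint code never has to look the robot up and can't fall out of sync with the search. All the names here are illustrative, not the project's actual classes.

```csharp
// Sketch of a custom EventArgs carrying the robot's position with each
// move, so drawing doesn't depend on looking the robot up mid-search.
// RobotMovedEventArgs and Search are illustrative names only.
using System;

class RobotMovedEventArgs : EventArgs
{
    public int Row { get; }
    public int Col { get; }
    public RobotMovedEventArgs(int row, int col) { Row = row; Col = col; }
}

class Search
{
    // The form subscribes to this and repaints just the affected cells.
    public event EventHandler<RobotMovedEventArgs> RobotMoved;

    // The search calls this after every move; the position travels with
    // the event instead of being looked up at draw time.
    public void OnRobotMoved(int row, int col) =>
        RobotMoved?.Invoke(this, new RobotMovedEventArgs(row, col));
}
```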
This blog post will include the demo folder (including test files) I used in my presentation. This also will have my full project file for the build I presented.
Today is the morning of the presentation. I have a few minor functions to implement before I present, but otherwise things look good. At the moment my code is pretty messy, so I'm going to have a lot of documentation to do before I turn in my code. There are a few bugs to work out, so I'll try to fix those the best I can. The design isn't exactly what I want yet, but at least it's all functional.
If someone picks up this project in the future, it would be cool to have a way to make a map without manually modifying a text file. It would also be cool to come up with a way to visualize an informed search. I had no good ideas for visualizing the numbers assigned to each space and the math behind choosing the next step from each state. With the uninformed searches, I thought it was straightforward to show exactly what the computer was doing at each state.
I struggled so much trying to get the first depth first search working, and simply talking with Dr. McVey for a few minutes revealed that I was making a shallow copy of my data. I meant to create a copy so that I had two different arrays at the same time: one with the actual state of the robot and target, and the other marking visited and unvisited spaces. Anyway, with the first depth first search working, the rest will be slight modifications of this function. It's exciting that it's almost done. I'll post the working project here.
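For anyone hitting the same bug: in C#, `Clone()` on a jagged array is shallow, so the "copy" shares its rows with the original. A minimal illustration (the 0/1 map encoding here is made up):

```csharp
// Shallow vs. deep copy of 2D map data in C#. Clone() copies only the
// outer array of row references, so both arrays share the same rows.
int[][] original = { new[] { 0, 1 }, new[] { 1, 0 } };

// Shallow copy: writing through it also changes the original.
int[][] shallow = (int[][])original.Clone();
shallow[0][0] = 9;   // original[0][0] is now 9 too

// Deep copy: clone each row, so the copies are independent.
int[][] deep = new int[original.Length][];
for (int i = 0; i < original.Length; i++)
    deep[i] = (int[])original[i].Clone();
deep[1][1] = 9;      // original[1][1] is untouched
```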
Well, I have been struggling to get the first, most simple depth first search working correctly. I met with Dr. McVey to see if she could help find the problem with my algorithm. I wonder whether it would have been better to go in earlier and ask simpler questions, or to do what I did and try to figure things out on my own until I got stuck and needed help with a more complex problem. Regardless, if you are reading this because you are working on your capstone, make sure you meet with your professors more regularly than I have. It's currently one week before I present, and when I asked Dr. McVey for help she wasn't familiar with my code.
Anyway, I will include the code I have at this point. I will also write a list of tasks to complete before my presentation to get my program to a satisfactory point. Currently my program is separated into three files: Main.cs, FloorMap.cs and AITestFunctions.cs. Main.cs handles all of the display functions and general user input events. FloorMap.cs handles all the data from the input files, stores the input data, and performs the logic for moving blocks. AITestFunctions.cs handles all of the AI logic and will store my depth first search algorithms (currently it contains a test function that I used to verify the three files interact properly, which required using threads). In the AI logic file I set up the depth first search from the current state of the floor map and then execute a recursive depth first search. There is a problem with this that I still need to find. Once this depth first search works, the other depth first search and the iterative deepening depth first search will just be modifications of that function.
Tasks before presentation:
- Depth first search for robot and target blocks
- Iterative deepening depth first search (limited depth first search that incrementally increases the depth limit)
- display final solution sequence
High Priority Optional
- change thread sleep number to a variable that can be changed from main form
- change goal block image to checkered pattern
- indicate visited but unusable spaces in the DFS with an X through the square
- mark visited spaces
- add an event in AITestFunctions.cs to mark the log where the recursive statement reaches a depth limit or determines a path is unusable.
- add "step" button to go through algorithm one step at a time.
- create a way to allow the user to make their own map directly in the program instead of needing to modify a text file.
- Create the informed search algorithm using A* search techniques. This will be difficult because I have no idea how I will show the user what logic the algorithm is using to make the decision of where to move next. (if this is your capstone next year and if I don't find a solution I would be interested to see how you approach this)
After reading chapter 3 of Artificial Intelligence: A Modern Approach (second edition) by Stuart Russell and Peter Norvig, I've come across two types of uninformed searches that I would like to start trying to implement. First is a depth first search. This doesn't seem like it will find the most optimal solution to the problem, but it will help find a solution. To implement this I will have to run the depth first search on a target block to find a path to the goal. I am thinking a slight modification of that search can be used to bring the robot to the correct location to push the target. The two data structures I will need for a depth first search are a tree of states and a stack holding the path currently being explored, popping off the stack whenever the search concludes a path won't reach the solution. There are limits that can be implemented to make sure this doesn't loop infinitely. I am thinking that if the search can't find a solution within a depth of about half the size of the map, then that path won't reach a solution.
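A rough sketch of that depth-limited, stack-based DFS over a plain grid. The 0 = empty / 1 = wall encoding and all names are my own assumptions, not the project's code; the depth limit is what prevents the infinite looping mentioned above.

```csharp
// Depth-limited DFS over a grid: explore neighbors recursively, push
// each step onto a path stack, pop on dead ends, and cut off any branch
// deeper than 'limit'. Grid encoding (0 = empty, 1 = wall) is assumed.
using System.Collections.Generic;

static class DepthLimitedSearch
{
    static readonly (int dr, int dc)[] Dirs =
        { (-1, 0), (1, 0), (0, -1), (0, 1) };

    // Returns true (and leaves the route in 'path') if 'goal' is
    // reachable from 'pos' in at most 'limit' moves.
    public static bool Dfs(int[,] map, (int r, int c) pos, (int r, int c) goal,
                           int limit, bool[,] visited, Stack<(int r, int c)> path)
    {
        if (pos == goal) return true;
        if (limit == 0) return false;            // depth cutoff
        visited[pos.r, pos.c] = true;

        foreach (var (dr, dc) in Dirs)
        {
            var next = (r: pos.r + dr, c: pos.c + dc);
            if (next.r < 0 || next.r >= map.GetLength(0) ||
                next.c < 0 || next.c >= map.GetLength(1)) continue;
            if (map[next.r, next.c] == 1 || visited[next.r, next.c]) continue;

            path.Push(next);                     // try this branch
            if (Dfs(map, next, goal, limit - 1, visited, path)) return true;
            path.Pop();                          // dead end: backtrack
        }
        return false;
    }
}
```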
The other type of uninformed search is an extension of a depth first search called an iterative deepening depth-first search. This seems promising, and it looks like it will find an optimal solution. It combines the limited space requirements of a depth first search with the benefit of finding an optimal solution like a breadth first search.
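The iterative deepening wrapper itself is tiny: re-run a depth-limited search with limits 1, 2, 3, ... and stop at the first success. Because shallower limits are tried first, the first solution found is a shortest one, which is where the optimality benefit comes from. The delegate here stands in for whatever depth-limited DFS the project ends up with.

```csharp
// Iterative deepening: call a depth-limited search with an increasing
// limit until it succeeds or a maximum depth is exhausted. The
// depth-limited search itself is passed in as a delegate.
using System;

static class Iddfs
{
    public static int IterativeDeepening(Func<int, bool> depthLimitedDfs, int maxDepth)
    {
        for (int limit = 1; limit <= maxDepth; limit++)
            if (depthLimitedDfs(limit))
                return limit;   // shallowest limit that succeeds = optimal depth
        return -1;              // no solution within maxDepth
    }
}
```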
Another type of search that seemed interesting was the bi-directional search, which, in the simplest terms possible, searches from both the goal location and the target location at the same time and checks whether the two searches meet on a path. This seems interesting, but I like the way the depth first search can be the foundation for a promising-sounding uninformed search technique.
I want to approach creating the nodes of the tree by only adding them if I can actually push the target in that direction. For example, say I have a target block with a wall on its west side: I know I can't move that block east, because I cannot position the robot on the block's west side. I will also need to consider whether any "empty" spaces that lie to the side of a target are unreachable for the robot. If that is the case, I also would not want to add that direction to the tree. I will include a little sketch of the examples I am thinking of below.
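The wall example can be turned into a simple feasibility check: a push in some direction is only worth adding to the tree if the square the target would move into is open, and the square the robot must push from (the opposite side) is also open. This sketch ignores the harder reachability question; the 0/1 encoding and all names are my own assumptions.

```csharp
// Checks whether a target at 'target' can be pushed one step in 'dir'.
// The robot pushes from the opposite side, so both the destination cell
// and the robot's cell must be in bounds and open (0 = empty, 1 = wall).
static class PushRules
{
    public static bool CanPush(int[,] map, (int r, int c) target, (int dr, int dc) dir)
    {
        var dest  = (r: target.r + dir.dr, c: target.c + dir.dc); // where the target goes
        var robot = (r: target.r - dir.dr, c: target.c - dir.dc); // where the robot stands

        bool InBounds((int r, int c) p) =>
            p.r >= 0 && p.r < map.GetLength(0) &&
            p.c >= 0 && p.c < map.GetLength(1);

        return InBounds(dest) && InBounds(robot)
            && map[dest.r, dest.c] == 0
            && map[robot.r, robot.c] == 0;
    }
}
```

A full version would also run a reachability search from the robot's current position to the push square, which is exactly the "unreachable empty space" case described above.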
I have a few ideas of how I could display what the algorithm is doing to the user. Perhaps making a modification to my move functions so that they can show where the algorithm is checking. I think it could look pretty cool depending on how I implement this. I want to meet with Dr. Pankratz sometime this week to clear up some of my ideas. I hope to get something working by the end of the week then I can focus on informed searches. From meeting with Dr. McVey early on in the semester the A* search sounded like an intuitive informed search strategy.
As a note on the pictures below:
Blue = Robot Block
Green = Destination block (where I need to move the target block to)
Red = Target block (the block that needs to be pushed by the robot)
White = empty space, can be used to move around on
Black = Wall, cannot move through walls
Okay, so the layout in this picture is unsolvable. But I will know that because I cannot move the target (red) block east: my robot block (blue) can never get to the left side of the red block. And once the red block reaches the top of the screen, it will block the robot from getting around to the left. It will also be impossible to get the target off of the north wall once it is pushed there.
The picture above shows another example where I would not want to add the ability to push the target block to the right, even though there is empty space to its left. The empty space directly to the left of the target is unreachable by the robot. Just another special case to consider.
It's been a while since I've updated my blog so let's go through what's completed and what needs to still happen.
- Open files and visually display contents
- Block interaction
- Manual manipulation of blocks
- Ending state reached
- can count number of moves made
- multiple target blocks are handled by program
- multiple goal locations can be used (not sure if I will actually implement this in search though)
I included a demo of what the application does right now with some test files. If you do give it a try the colors have the following meanings:
white - empty space available to move in
black - walls
blue - robot that you can control
red - target blocks that can be moved by the robot
green - goal location to move target blocks.
With this done I've got most of the tasks related to the UI out of the way (minus some polish). At this point I need to complete work on actually getting state-space searches implemented. It should be easy enough to debug what's going on though because every move that is made will update the visual and the log.
I have been thinking about my project but haven't written any code yet. I don't want to start going in the wrong direction early. I looked at the project from last year more in depth and I realized I need to do much more than I initially thought. Here is what I need to do from this point.
- Have a meeting with Dr. Pankratz to determine how many types of searches I should include in my project, and see if he has any general advice about approaching it.
- Research space search algorithms
- Determine data structures
- Determine file storage system
- Create visual representation of this data
- Load floor layouts with static floor layout and start/end locations. Only one target block
- Create functions for block interaction (robot pushes target block)
- Get manual block manipulation created (this will be for testing purposes only)
- Make sure ending state is able to be reached
- Create analytics of the paths taken on a floor layout. This will later determine which search algorithm is optimal.
- Tie in the space searches making them fully autonomous
- add in second target block
- add ability to add more than two target blocks.
- add ability for some barriers to move changing the floor plan
- determine best path after the modification of the floor plan or if no path exists
It seems as though Charlie Popp, who had this project in 2016, used a 2D array and was only able to properly implement the best-fit search. I need to look further into why he couldn't get the other search techniques to work properly. The 2D array seems like the most straightforward way to store the floor plan. There are quite a few scenarios where a path may appear to exist but will end in failure due to the restrictions on the robot's motion. This seems like it will be one of the biggest challenges to overcome at first.
This week's goals:
I want to get the visual representation of a floor plan working. This means loading it in from a file, probably storing it in a 2D array.
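A minimal sketch of that loading step, assuming one character per grid cell in the text file. The '#'/'.' encoding is a guess, not a committed file format.

```csharp
// Loads a floor plan text file into a 2D char array, one character per
// cell. Short lines are padded with '#' (wall) so the array stays
// rectangular. The file format here is an assumption.
using System;
using System.IO;
using System.Linq;

static class MapLoader
{
    public static char[,] LoadFloorPlan(string path)
    {
        string[] lines = File.ReadAllLines(path);
        int rows = lines.Length;
        int cols = lines.Max(l => l.Length);

        var map = new char[rows, cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                map[r, c] = c < lines[r].Length ? lines[r][c] : '#';
        return map;
    }
}
```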
I have been assigned the state space search project. This project is a continuation of Charlie Popp's project from 2016. A few notes I've heard from Dr. Pankratz and Dr. McVey are that I must get a working visual component for this. I have a few ideas for the visual from some videos I've watched demonstrating how a computer would solve a maze. From what I can remember, the whole maze starts black and blocks slowly become visible as they are visited or tested by the computer. I want to try to get something like this working.
I think it might be a cool element to have the ability to manually try to find an optimal path. It might be an interesting way to get the visual component running well right away and then tie in the state space search element.
Looking through Charlie's blog from 2016, it seems he was having problems getting the block to push the other block. I'm not sure exactly how he decided to run his search or how he decided where to push the block. But in my brainstorming I think I could run the optimal search on the goal block as one component, then use the robot block as a tool to make that search happen. For example, there are eight places the target block can be pushed from. If there is a wall, moving in one direction would become blocked off. Maybe one way to execute this could be to have the search on the target block act as if there is a wall wherever I can't get the robot block to one side of the target block. Almost like building a second set of walls depending on where the robot can reach.
So in my first approach I'm thinking I would run the search algorithm on the target, then build a component with the moves required of the robot. In order to get a working model for the target, I may have to add obstacles to the target's search to represent where the robot can access. I want to get the visual working quickly, and maybe get a working model for moving the robot manually. Then the hard part would come: actually getting the search algorithms to work.
I'm thinking C# is a must for the visuals. Event programming would also give me an easy way to do manual control of the robot, and then the search algorithms would basically take over from the manual control. I would then need to tie in the analytics portion.
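The key-to-move mapping at the heart of that event-driven manual control could look like this; a WinForms KeyDown handler would translate the pressed key, apply the move, and repaint. All names here are placeholders.

```csharp
// Maps a key name (e.g. from a KeyDown event) to a robot direction.
// A search algorithm would later call the same move code directly,
// bypassing this mapping. Direction and the key names are placeholders.
enum Direction { North, South, East, West, None }

static class ManualControl
{
    public static Direction KeyToDirection(string key) => key switch
    {
        "Up"    => Direction.North,
        "Down"  => Direction.South,
        "Left"  => Direction.West,
        "Right" => Direction.East,
        _       => Direction.None,
    };
}
```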
The most challenging twists will be adding multiple target blocks and then having a moving floor plan. My first thought when approaching a moving floor plan is: how will I know when to give up? A path could open after a wall moves, even in an area I've already checked. My first reaction is to set a limit on how many times to try to find a path and give up once the limit is reached.
For next week I will be researching state space search algorithms, brushing up on C#, and seeing if I can get a visual to use. I will also try to put a more concrete plan together and set deadlines for myself.
I became interested in computer science after building my own computer in high school. I have since made a hobby of this, building about 10 computers for family, friends and myself.