Well howdy howdy howdy howdy howdy. This is the beginning of the blog/updates section of my capstone experience in my final semester as a Computer Science student at St. Norbert College. As you can tell from the first page, my capstone project is a way to monitor and report usage of the Computer Science lab by its students. I'd like to keep this section pretty informal if possible, since that makes the whole thing a bit lighter. Most of what I will be posting in here are weekly updates on my project and maybe some links to websites for research that I find. Hope you enjoy!
So far the tasks that I had set for myself after receiving my project definition on January 23rd were as follows:
-Research camera options
-Look at options for reporting the data
-Create a more detailed outline for the project
-Build the website!
Well as you can see, I have at least accomplished the creation of the website (very generically) for this week! As far as the other tasks, I have made some progress, but not as much as I had hoped. Most of my weekend and early week after receiving my project was spent solely researching what kind of camera to use for this project. Even now, I'm not entirely sure which camera to go with. I have taken the time to meet with my professor, Dr. Pankratz, and got a camera that has been used in previous projects: the Foscam FI8910W. This camera pans, tilts, and connects wirelessly, and it can record and stream live video to your devices. My initial thought was to go with the Microsoft Kinect or Kinect 2 (Xbox One version) due to their skeletal-tracking firmware; however, since both are now discontinued, the prices to get going with those would not be worth it at this time. Although, just today (January 31st) I was talking with a member of the SNC IT team and he pointed me to some software that could possibly give the Foscam the ability to act like a Kinect, but I will have to research that more to confirm.
As far as my other two tasks, I have looked at some options for reporting data, and I am currently not entirely sure how I would like to present the data in the end. I do have an outline started, but it is more just a list of thoughts and ideas that I need to put into a more cohesive "to-do" list in order to remain on top of this project. This is definitely a different type of programming that I have no real experience in, but I am excited about this project and ready to hit the ground running!
Hey there again. This is the second entry on my Capstone Experience. This second entry only covers a couple days, so I don't have that much more to report on from this past Thursday, but there is some progress! Mainly what I've been working towards these past few days has been getting connected to the Foscam and getting a picture from it on the screen. Unfortunately, I don't yet have that all the way working, but I am getting closer. The rest of the time has been spent solidifying an outline for my project, along with researching some different ways of presenting the reporting data that will eventually be compiled from the Foscam. Initially, I was thinking of populating a spreadsheet in Excel. But I have also looked at different options, like basing the presentation on a calendar where you would click a date and it would compile and report the data from there. Note: I don't currently have any visuals to upload because I'm still working on them, but I will be sure to upload them when I have them solidified.
This next week my tasks include (but are not limited to):
-Get my "hello world" with the Foscam
-Continue to develop the motion tracking from there with the Foscam
-Choose a reporting format
-Get drawn-out designs for the UI of the reporting application
-Update website and make it beautiful.
Hiya! Late-night updates coming at you. It is currently 11:29:32 PM on February 4th and I finally had a minor breakthrough, after some time spent this weekend learning that I am completely illiterate when it comes to setting up an internet router. Good news though: as you can see above, those are screenshots from the Foscam itself showing its own version of "Hello World". I now have access to the camera and some basic abilities that the Foscam software has on its own. Next steps will be focusing on getting my own code to recognize the camera and send commands, or figuring out how to send commands through the Foscam software instead. From there it will be setting up how to store information when motion is detected and then recording and reporting that data. Sounds so easy when I type it down, but it will be a long road. So goodnight, sleep tight, and thank goodness the Pats didn't win!
Hello there, just wanted to give a bit of an update as to what I have been up to during this week and some items I may have left out of the last post. In my research of the Foscam and the possibility of tracking humans, I have come up with a few thoughts:
-Implement the Microsoft Kinect as the monitoring camera instead, since it has skeletal tracking in the camera's firmware.
-Use a neutral image as a bitmap baseline and compare the differences between the pixels.
-Look into an open-source library called TensorFlow, where developers have been able to create Kinect-like skeletal tracking for webcams.
Other than this, I have spent a considerable amount of time looking at last year's capstone participants with projects similar to mine. The two projects I have been looking into are Ben Talbot's and Carl Peterson's, both of whom used the Foscam in their projects last year. Along with that, I have been in personal contact with Ben over the past couple weeks in order to get some more tips and help on getting the Foscam online. Other than that, I've been further developing my outline and building a PERT chart to show my thoughts and goals for this project.
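The neutral-image idea above is basically frame differencing: compare each incoming frame against a "no one in the room" baseline, pixel by pixel. The project itself will be in C#, but here's a rough Python sketch of the logic just to make it concrete (the function names, thresholds, and plain-list "images" are all my own illustration, not actual project code):

```python
def count_changed_pixels(baseline, frame, threshold=30):
    """Compare a frame against a neutral baseline image.

    Both images are lists of rows of grayscale values (0-255).
    Returns how many pixels differ by more than `threshold`.
    """
    changed = 0
    for base_row, frame_row in zip(baseline, frame):
        for base_px, frame_px in zip(base_row, frame_row):
            if abs(base_px - frame_px) > threshold:
                changed += 1
    return changed


def motion_detected(baseline, frame, threshold=30, min_changed=50):
    """Flag motion when enough pixels have changed from the baseline."""
    return count_changed_pixels(baseline, frame, threshold) >= min_changed
```

The per-pixel threshold is there so camera noise and small lighting shifts don't register as motion; the `min_changed` floor is there so a single flickering pixel doesn't either.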
Hi there, this post is going to be a bit shorter compared to the first few. This weekend involved some traveling that took up a significant amount of time, which kept too much work and progress from being made. But during the drives I was able to write up some more design concepts and planning for the UI of the reporting application. I also was able to play around a bit with the Kinect SDK and see that both C++ and C# are possible and supported as programming languages. I plan to hit the code hard this week, get you all some more pictures, and talk through with my professors which camera to utilize for the project from here on. The benefits of the Foscam so far are that there are student resources and that both of my professors have had some experience with it and can help me. Also, the Foscam has the capability of WiFi streaming, which means the feed can be viewed without having a computer connected to it at all times. However, the Kinect has skeletal tracking built into the firmware of the camera, so it will be able to distinguish humans from other motion from the start. Currently, I am unsure if it has the ability to stream a live video feed over WiFi. Otherwise, what I've found is that I can connect a computer (computer 1) to the camera to view the feed and then use a remote desktop connection (on computer 2) to view the computer (computer 1) connected to the camera. These options will have to be weighed quickly in order to decide where to go from here and actually get into the bulk of the project. Thanks for reading!
Moving into another week with the capstone project, and it is time that I update you all on some events that have occurred this past week. During the class times that we held during the week, we participated in mini poster-board sessions. Every person in the class had the opportunity to present their thoughts and design processes for their projects. I found this to be very productive on both ends of the process. I was glad to receive some important input from both professors and also some of my fellow students. This also allowed me to get a glimpse into most of my classmates' projects and how they were faring. It seems that most of us are in the coding stages, or at least getting around to the coding now after some solid design time. At the end of this post I will post pictures of my board presentation and what I had up there for a more visual project flow and UI design. Some important take-aways I had from the poster sessions were:
- I need to determine how and what specific data I need to save in order to properly report the lab usage (i.e. flat file, network/database, server requests).
- Check into the possibility of multiple users trying to take over camera control at the same time.
- Utilize server requests to get the information from the computer or camera.
- The possibility of running the program that captures the data on the compsci02 server, so we wouldn't need a physical computer in the lab running at all times.
- Another possibility of concern is compiling and storing data for long periods of time (i.e. how long to store data: week, month, semester, year).
Finally, some updates to the actual process of my capstone project beyond the classroom. This week has been another week of working on getting the Foscam's and Kinect's functionalities down in order to finally determine which to go forward with for the rest of the semester. I have been able to get connected with the Foscam and have some minor movements and such with it.
And the Kinect software allows for easy connection with the camera itself, but I am looking a little more into the actual programming with it from the interwebs and also some past capstone experiences with the Kinect. The Foscam seems to be more in favor currently because there is more recent documentation and usability described in Carl's and Ben's capstones, as well as the expertise of Dr. Pankratz. By the end of this week I aim to have a decision on a camera for my project so that I can focus solely on the process of recognizing a human from the camera image. Thank you for reading, and I will add more to this experience as it arrives.
Hello again. Over the past couple weeks in our Capstone Experience, we have been talking about the importance of human-computer interaction, as well as some of the ethics of software design and how the field is currently expanding. Some of this has been pretty interesting and has given me a chance to really think about and continue to plan out my user interface a bit more in depth. Other than that, this past couple weeks I have been researching how to generate reports in C# as well as looking into a lead that Dr. Pankratz found regarding facial recognition. This way I can count the number of people coming into the lab from the camera feed. There has been some progress, but I do need to refine things a bit and ensure that I understand how the code and algorithms described work in their entirety. I hope to have more successes this week and to have the facial recognition done by the end of next week at the very latest.
Hi everyone, I am going to be going on spring break and looking around for apartments for after graduation, but there has been some progress made. Unfortunately, because I don't know if I will have access to the internet during the whole break, I will be switching gears to work on the reporting application part of the project. Otherwise, my time over the past week and a half has been spent troubleshooting some errors that a friend helped point out with gaining access to and controlling the camera. Currently, if I run the program without having the camera and the computer running the application connected to the same network, the application fails. Also, I have noticed that the buttons sending the panning commands to the camera tend to stay pressed even when the user only single-clicks the button.
Hey there! This is the first week back from spring break and I have made some interesting progress with the reporting portion of the application. I have found out that there are packages in Visual Studio 2017 that allow a programmer to create and manually open other Microsoft applications. So along with creating the option for the user to receive bar graphs and other graphs within the application, it will also be able to generate and save an Excel document in the active user's directory. I also found out how to direct the Excel file that is generated to a specific folder, including one that the application creates on the user's hard drive just for these reports. This part proved to be a bit difficult because I had to figure out a way to get the directory of the current user. For example, C:\Users\'Username'\ will be different for every user, so I had to find a way to know the user's directory every time in order to create the Reports folder there. I also have coded it so that if the folder doesn't exist in the user's directory, it automatically creates one to put the reports in.
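For anyone curious what that per-user folder logic amounts to, here is a minimal Python sketch of the same idea (the real application does this in C#; the function name, the "Reports" folder name, and the `base` override for testing are my own illustration):

```python
import os


def ensure_reports_dir(folder_name="Reports", base=None):
    """Find the current user's home directory (e.g. C:\\Users\\<Username>
    on Windows) and make sure a reports folder exists inside it.

    `base` can override the home directory, which is handy for testing.
    Returns the full path to the folder.
    """
    home = base if base is not None else os.path.expanduser("~")
    path = os.path.join(home, folder_name)
    os.makedirs(path, exist_ok=True)  # create only if it doesn't exist yet
    return path
```

The two key pieces mirror what the post describes: resolve the user's home directory at runtime instead of hard-coding a path, and create the folder only when it is missing so repeated runs are harmless.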
Hello blog readers. This past week during capstones we had our mini demos in class. We presented to the rest of the capstone class how our projects were going. Without completely showing our applications or our code, we told the class what we have completed, what our next steps were, and some current challenges that we are facing. I found this to be quite informative, both for myself and for seeing where all of my other classmates were. My presentation helped me specifically because I found out that my plans and research had begun to stack up so much that I was going outside of the requirements of the project. I was getting too ambitious with certain aspects and I needed to reestablish my focus on completing the main requirements first and foremost. As far as next steps for myself, this weekend is Easter weekend so there will be some time spent with my family, but the rest will be spent trying to get the recording of the live stream to work from a user standpoint, then attempting to add in the motion detection as well if there is time. Thank you for reading and see you next week.
Salutations! After Easter break I can say that I was able to get the recording of video to work by user input. Unfortunately, it wasn't as simple as I had originally thought and hoped. My initial thought was to take the incoming stream and pump the frames out to a file, or put them in some sort of structure and then save that to a file. I did find that the third-party library EMGU has a VideoWriter class that takes images (a specific type of the EMGU library) and writes them to a video file. This was helpful, except that I needed to write some image extensions in order to convert the stream, which comes through in the form of an ImageSource (which can also be a BitmapSource), to a Bitmap, and then to the EMGU Image class. Once I had figured that out with the help of a few blogs and such, I ran into the problem that the video files that are written can have one of many different compression types, all for the same file extension, .avi. It has taken some time, but I figured out how to invoke the program to ask me what type of compression I would like to write the file out as, though only if the user selects a portion to record. The other problem that persists is that the compression types the EMGU VideoWriter class allows don't necessarily work with most video players, like Windows Media Player, until you download a codec package that includes all of the obscure image compression types. So the user will have to download a package in order to play these files, but they are being generated based on the stream coming from the camera and they can be saved to the user's disk in a folder I specify. Something else I would like to have working by full demos would be having the video writing start when the camera has detected people, as well as when the user wishes; however, that may be something to save until I have all of the other requirements completed.
But as of right now, the requirement that states I must capture snippets of video is working! Just to note, I have attempted to add an example to the HTML of the website; however, I cannot figure it out tonight. So I have supplied a link to download the example video. Note, however, that you will also need the codec package linked below for Windows Media Player in order to watch the video, due to the compression type (select all of the options within the installer). Link to download video example. Link to download Codec package for Windows Media Player.
Good evening, this is a short update midweek. Today at around 11:45ish AM, I finally debugged my issue with the EMGU libraries and the HaarCascade objects that had been giving me trouble for a while. As of right now, my application is able to recognize faces when they come into and leave the camera stream. This weekend the plan is as follows:
1. Get my lab coat and goggles.
2. Set up my own laboratory in an ancient castle.
3. Embrace my inner Dr. Frankenstein.
4. Sew all the pieces of my project together.
Real goals:
1. Finish tuning the facial recognition portion so that it will collect data on people "coming into the room".
2. Save that data in a structured file layout so that the application will be able to query it quite simply.
3. Ensure that the Excel work and other graph plans will take in that data and report it to the user effectively.
4. Narrow down the correct codec compression so that the video stream can be recorded in snippets, not only by the user but also autonomously when motion and a person are recognized.
Thank you for your time, and I realize that I don't necessarily have "proof" that the facial recognition is working at this current moment. However, currently I am only updating a count to show how many faces the application picks up, so it is kind of difficult to show that here... Nonetheless, after this weekend I will do my best to have some way of showing you that the facial recognition is working properly. Have a good night.
Hey there everyone, so this weekend I didn't get to embracing my inner Dr. Frankenstein, unfortunately, but I did run into some issues that had to be addressed with the project. As far as the goals go, I have not yet finished tuning the facial recognition portion so that it'll collect data, but I did focus on putting the facial recognition on a separate thread so that the UI stays responsive while the facial detection is active. Second, I do have a structure built in my library so that once I do have the data from the facial detection, it will be easy for the program to read from. The way this will work is that there will be a directory created on the user's drive called "Detection Data" at C:/User/Username/CS_Lab_Usage. Within this folder, the text files that will hold the detection data will be structured by year, then either month, or month then week, and each text file will hold information for a whole day. This way the graph building can occur weekly on its own, but the data can also be queried and written out however the user wishes. Third, I have found that in the newer version of EMGU_CV (Version 18.104.22.16824), the HaarCascade files that help with facial detection, as well as the video writer, have changed and improved. So I did spend some time moving my code over to the new version. This posed some serious difficulties with the VideoWriter class because they removed the old Write function, and the new one no longer worked with their own Image class. So I had to change my functions to take the images, convert them to a Mat object (also an EMGU object), and make a List of those to write to a file. The true bonus of this new VideoWriter class, however, was that it no longer required a specific codec compression in order to write the file, so with the correct file extension (avi, mp4, etc.) we can write the file as whatever we desire.
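To make that file layout a bit more concrete, here is a small Python sketch of the year/month structure described above, with one text file per day of hourly counts (the real project is C#, and the exact folder and file naming here is my own guess, not the actual scheme):

```python
import os
from datetime import date


def detection_file_path(base_dir, day):
    """Build the path for one day's detection data, organized
    year -> month -> one text file covering the whole day."""
    folder = os.path.join(base_dir, str(day.year), day.strftime("%B"))
    os.makedirs(folder, exist_ok=True)
    return os.path.join(folder, day.strftime("%Y-%m-%d") + ".txt")


def append_hourly_count(base_dir, day, hour, count):
    """Append one 'hour count' record so the reporting side can
    query it later with a simple line-by-line read."""
    with open(detection_file_path(base_dir, day), "a") as f:
        f.write("%02d %d\n" % (hour, count))
```

Organizing by year and month like this means the weekly graph builder only ever has to scan one small folder, and an ad-hoc query for an arbitrary date can construct the path directly instead of searching.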
For this application's purposes, I am thinking of writing them as mp4 files, because this offers quite significant compression without deteriorating the images and keeps the files much smaller than the .avi files I had implemented with EMGU_CV 2.4.1. Finally, because I spent a lot of time this weekend tuning the video recording with the motion detection, as well as working to get the facial recognition on a background thread, I didn't get the chance to get the graphing portion working with anything other than test data. Next steps for me are to meet with Dr. Pankratz tomorrow morning and give a thorough update on the project thus far. After that, I will incorporate his feedback and then focus on getting the facial detection to start returning data. Then I will tie that data into the graph generation for statistical feedback. By this Wednesday, April 11th, I want to be at a point where I can set up the camera in the lab for a day or so, so that I can record actual data to use and test my algorithms for facial detection. Thank you for your time, and have a good night.
Hey there everyone, this past week has been yet another busy one. On Monday (the 9th) morning I met with Dr. Pankratz, and we discussed some of my plans and went over my application so far, which went pretty well! I then spent the next two days working on getting the facial detection algorithm in my program to track people to the point where it will count a person who walks into the room during that hour. Because the facial detection algorithm finds the faces in an image and then fills a rectangle array with the faces it finds, I am able to take the location of each rectangle and create a "counting zone" within the frame that determines when a person can be counted. The tricky part with this approach is making it so that each person is detected and counted only once, so people who walk too fast may not be found, and people who walk too slow may be counted more than once if I am not careful enough. Then on Thursday (the 12th) I took some time during one of Dr. McVey's labs to test the counting of people for each hour with a larger and more diverse pool of people, and found some interesting things:
1. Because the camera cannot find people if it is mounted to the ceiling, I had to mount it to the railing.
2. There are still some detection issues with the camera on the railing because it is not directly perpendicular to the people walking in, so it tends to miss people unless they look right at the camera.
3. The detection algorithm with EMGU_CV's files tends to be a bit "sexist" in that it has trouble detecting female faces. I think this is because of generally longer hair, which can cause more shadows on the face and "hide" it.
4. Some people tend to walk a bit fast, so the detection algorithm doesn't catch them while they are in the counting zones within the frame.
This weekend I spent some time delving into different ideas on how to count people, since it seems that there were some false positives being counted during the lab session on Thursday.
Realistically the count should've been around 23-25, and at the end of the session the count was at around 60! So, after thinking, I'm going to try to implement one more type of counting where I check to see if the person walking in has been detected for a minimum of 3 frames and then count them. Otherwise, I could make my own class that will have an array of rectangles as well as an array of booleans of sorts to ensure that each rectangle is only counted once. Hopefully, I will get a chance to go back into the lab and collect some more data, given that we are now under a blizzard warning and campus is shut down. Tomorrow I will try to add some more visuals to the website to help explain my plans a bit more, now that we are just a week away from presentation week. Thank you for following, and continue having a good night.
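The minimum-3-frames idea is really just a debounce on the detection signal. A hedged Python sketch of the logic (the real code would work per face on the EMGU rectangles; this simplified version just tracks whether any face was detected in each frame):

```python
def count_with_debounce(frames_with_face, min_frames=3):
    """Count a person only after a face has been detected in
    `min_frames` consecutive frames; reset when the face disappears.

    `frames_with_face` is one bool per frame: was a face detected?
    Returns the total number of people counted.
    """
    count = 0
    streak = 0       # consecutive frames with a detection
    counted = False  # already counted this streak?
    for detected in frames_with_face:
        if detected:
            streak += 1
            if streak >= min_frames and not counted:
                count += 1
                counted = True
        else:
            streak = 0
            counted = False
    return count
```

This directly attacks the false-positive problem: a one-frame "ghost" detection never reaches the 3-frame streak, and a person lingering in view for many frames is still only counted once per appearance.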
Good morrow, all. It has finally come down to it: the presentations for our capstone projects. This week we take the time to present to the class and the rest of the college, as well as any others who wish to join, the fruits of our labor during this whole semester. This year, capstone presentations occur Wednesday, April 25th from 6:00PM-9:00PM, and Thursday, April 26th from 8:30AM-9:30AM as well as 6:00PM-9:00PM. If you would like to come and witness my project presentation, then feel free to come to GMS 1097 on Wednesday, April 25th at 6:00PM, where I will kick off the presentations with my own. As far as project work this past week, I had a couple more snags but also some successes. I was able to implement and test a way for the program to count a person walking into the lab only once. To do this, I added two arrays to the form. One is an array of Rectangles, just like the one that the facial detection algorithm uses. The other is an array of boolean values. The way the algorithm works is that if the rectangle of a detected face has a size greater than an integer threshold, then that rectangle goes into the second rectangle array, which the whole form can see. Then, another function takes that second array of rectangles and switches the matching index of the boolean array to true. In this way, the function only counts each person once, until that rectangle is no longer recognized within the frame. I then also tied the person detection algorithm to activate only when motion is detected. This made the program more efficient and further prevented any "ghosts" or false positives from being counted. I will be sure to add some more visuals to display the functionality of this algorithm closer to my presentations for further clarification. This past weekend was also spent ensuring that the count by hour of the people coming into the lab will automatically write to a file in a directory that I have created, to be queried at a later date.
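For the codeaholics, here is a rough Python sketch of that two-array counting scheme (the project itself is in C#; the rectangle-overlap test and the size threshold here are my own simplification of how a face rectangle might be matched between frames):

```python
def rects_overlap(a, b):
    """a and b are (x, y, w, h) rectangles; True if they intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah


class PersonCounter:
    """Count each detected face once until it leaves the frame,
    mirroring the paired rectangle/boolean arrays described above."""

    def __init__(self, min_size=40):
        self.min_size = min_size  # ignore rectangles smaller than this
        self.tracked = []         # rectangles in frame, already counted
        self.count = 0

    def update(self, detections):
        """`detections` is the list of face rectangles for one frame.
        Returns the running total of people counted."""
        big = [r for r in detections
               if r[2] >= self.min_size and r[3] >= self.min_size]
        # count only rectangles that don't match anything already tracked
        for rect in big:
            if not any(rects_overlap(rect, t) for t in self.tracked):
                self.count += 1
        # remember this frame's rectangles; a face that disappears and
        # reappears later counts as a new entry into the lab
        self.tracked = big
        return self.count
```

The size threshold filters out tiny false detections, and the overlap match plays the role of the boolean array: a face that persists across frames keeps matching its previous rectangle, so it is never counted twice.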
Unfortunately, I do not have much "realistic" data, since the past two weeks have been spent fixing the algorithms in order to ensure accurate counting. But now the groundwork is set for it to record continuously and keep accurate data to be reported. The rest of this week will be spent cleaning up code and preparing for my presentation. After that, things will continue to wind down until everything is turned in and completed. After my presentation this week I will be adding the presentation, all of my code, and more diagrams to the website here to help with understanding of the project and its content. Thank you for reading and keeping up with this project, and continue to look for more updates.
Hello there once again. This last week was a bit of panic, rush, and relief all tied together. I presented on Wednesday 4/25 and I feel that it went very well. I got to talk to some of the people outside of Computer Science who came, and they said that my project was pretty interesting. Beyond my own presentation, it was amazing to see all that my fellow classmates were able to produce in the same time that I had. Even though not everyone felt they were at the same point as others in completing their projects, all the progress that was made this semester was incredible and lays the groundwork for future capstones to continue to improve upon. Now that presentations are over, we have project defenses, where we each meet one-on-two with the professors and go over any other questions and notes that they wish to have clarified. Because my defense is slated for Thursday 5/3, I will have a version of my code on this website under the "Plan" tab, or under its own tab, sometime on Tuesday 5/1, so feel free, codeaholics, to go and review my project in its working condition. Other than that, this project is now beginning to wrap up, and I will spend my time correcting "bad" code or practices and making sure that the project is as efficient and as well documented as I can make it by Wednesday 5/9. Please also look under the "Plan" tab on this website if you would like to look at the presentation that I gave last Wednesday. Otherwise, enjoy the link below. Thank you for your interest in my program, and keep up with the "Plan" tab to see other things that I will add this week, including how-to's, project notes, testing notes, and ideas for extensions of the project that I didn't get to. View My Presentation in PowerPoint