CS530 – Developing User Interfaces Final Projects

The Drexel Summer Quarter is coming to an end, and the CS 530 class will be presenting their final projects. Some will be highlighted below on this blog.

You can also browse projects from previous quarters of CS530 and CS338 below.


TAME: TrAck My Emissions


Figure 1: Dashboard 

Introduction

As awareness of the adverse effects of global warming grows, many householders are willing to change their habits to lead a more energy-efficient lifestyle. Doing so not only reduces their carbon footprint but also has a positive financial impact through lower utility bills. In a study conducted by Chetty et al., householders reported that they desired real-time information about their resource consumption, and the findings suggested that such real-time information is related to improved sustainability as well. Hence the main goal of the application is to reveal this otherwise invisible information: the carbon emissions associated with the various forms of energy a household consumes, and the impact that consumption has on the environment.

Motivation

The motivation behind this application comes from the many activity trackers and expense trackers that give users meaningful information with which to manage their workout routines or monthly expenses. For example, expense-tracking applications let users plan a monthly budget and track spending by sorting each transaction into a meaningful category (rent, utility bills, gas, groceries, etc.), which helps them keep their finances in order.
In the same spirit, the goal of this application is to provide users with real-time statistics about their energy consumption so they can make informed decisions, reduce their carbon emissions, and lead a more environment-friendly lifestyle. The applications that currently exist are complex in nature and difficult for novice users to comprehend. The goal here is to offer a simple interface that can reach the majority of the population and help them achieve an energy-efficient lifestyle.

Audience

The main goal of the application is to provide users with real-time statistics that enable them to track the carbon emissions resulting from their energy consumption. The application can therefore be used by a wide range of users across multiple age groups; the only prerequisite is basic familiarity with operating a web application in a browser on a computer or mobile device. No special skill is required.


Figure 2: Utility Detail View

 

Features

The key features of the application, compared to other carbon calculators, are:

  • Users can create individual accounts to keep track of their emissions data.
  • Users get a detailed account of their carbon emissions, which are calculated monthly.
  • Interactive graphs allow users to analyze their data easily.
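The monthly calculation behind the second feature can be illustrated with a small sketch. The emission factors below are approximate EPA-style equivalencies, and the function and field names are mine, not the application's actual code:

```javascript
// Illustrative emission factors (approximate EPA equivalencies; verify current values)
const FACTORS = {
  electricityPerKwh: 7.09e-4, // metric tons CO2 per kWh of electricity
  gasPerTherm: 5.3e-3         // metric tons CO2 per therm of natural gas
};

// Convert one month's utility usage into metric tons of CO2
function monthlyEmissions({ kwh, therms }) {
  return kwh * FACTORS.electricityPerKwh + therms * FACTORS.gasPerTherm;
}

// A household using 900 kWh and 40 therms in one month
console.log(monthlyEmissions({ kwh: 900, therms: 40 }).toFixed(3));
```

A dashboard like the one in Figure 1 would plot a series of such monthly totals.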

Future Work

Currently, the application is limited to calculating carbon emissions from electricity and gas consumption only. In the future, to produce a more accurate emissions figure, it could be expanded to include carbon emissions from users' food, travel, and shopping patterns. Based on initial participant feedback, utility logging could also be automated by integrating the application with utility providers.
The application currently supports only addresses in the United States, because its equivalency calculations are borrowed from the U.S. EPA. Support for other countries could be added by importing the equivalency methods published by the corresponding government institutions of each country; the address provided at registration would then determine which country's calculations apply to the user's zip code.
Additionally, the application could be enhanced by letting users add or remove graphical widgets according to their preferences.

Closing Thoughts

Working on this project not only allowed me to apply the design and analysis concepts taught in class but also gave me an opportunity (to be honest, an excuse) to learn new technologies such as NodeJS, ExpressJS, MongoDB, and Angular. You can check out the source code here and the video describing the system here.

 

 

AlgoTutor: An Easy Way To Learn Complex Algorithms

Introduction:

AlgoTutor is a web application that helps users learn algorithms in the fields of Data Structures and Artificial Intelligence in an easy way. Generally, algorithms in these two fields are a little complex and difficult to understand. Hence, the main goal of this application is to make the learning process interesting and help users master them. AlgoTutor focuses on Data Structures and Artificial Intelligence because these two fields underpin much of computer science and the technology of the future, so knowledge of their basic algorithms has become a necessity for computer scientists. Currently, AlgoTutor helps users understand and learn the A-Star algorithm, a best-first search algorithm widely used for pathfinding in applications, robotics, and games. One such application of A-Star is solving the Sliding Brick Puzzle.

Intended Users:

As AlgoTutor helps its users learn algorithms in the fields of Data Structures and AI, its intended users are mainly students. Professors, professionals working in technology, and tech enthusiasts can also use the application to learn and master the algorithms.

Features:


To make the learning process easy and interesting, the application is divided into three main sections: Learn, Implement, and Assess.

Learn A-Star Algorithm:


  • Students can begin the learning process from Learn A-Star Section.
  • This section explains the basic concept, steps, theory, and applications of A-Star algorithm using intuitive images, videos and easy to understand explanations.

Implement & Execute A-Star Algorithm:

  • Once students understand the concepts of the A-Star algorithm, they can use the Implement section to develop the A-Star algorithm in Java.
  • Students can use the Java code snippets provided in this section to develop their own code. As mentioned above, the AlgoTutor application helps students implement the A-Star algorithm to solve the Sliding Brick Puzzle.
  • The Execute section provides the final output of the code, which gives the steps to solve Sliding Brick Puzzles of various dimensions, as shown in the image above.
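The project implements A-Star in Java; as an illustration of the algorithm this section teaches, here is a simplified grid-based sketch in JavaScript (the function and the example grid are mine, not AlgoTutor's code):

```javascript
// A simplified A-Star search on a 2D grid (0 = free, 1 = wall).
// Returns the length of the shortest path, or -1 if the goal is unreachable.
function aStar(grid, start, goal) {
  const key = ([r, c]) => `${r},${c}`;
  const h = ([r, c]) => Math.abs(r - goal[0]) + Math.abs(c - goal[1]); // Manhattan heuristic
  const open = [{ pos: start, g: 0, f: h(start) }];
  const closed = new Set();
  while (open.length > 0) {
    // Pop the node with the lowest f = g + h (a priority queue would be used in practice)
    open.sort((a, b) => a.f - b.f);
    const cur = open.shift();
    if (cur.pos[0] === goal[0] && cur.pos[1] === goal[1]) return cur.g;
    if (closed.has(key(cur.pos))) continue;
    closed.add(key(cur.pos));
    for (const [dr, dc] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nr = cur.pos[0] + dr, nc = cur.pos[1] + dc;
      if (nr < 0 || nc < 0 || nr >= grid.length || nc >= grid[0].length) continue;
      if (grid[nr][nc] === 1 || closed.has(`${nr},${nc}`)) continue;
      open.push({ pos: [nr, nc], g: cur.g + 1, f: cur.g + 1 + h([nr, nc]) });
    }
  }
  return -1;
}

const grid = [
  [0, 0, 0],
  [1, 1, 0],
  [0, 0, 0]
];
console.log(aStar(grid, [0, 0], [2, 0])); // shortest path length around the walls
```

Solving the Sliding Brick Puzzle uses the same search over puzzle states instead of grid cells, with a puzzle-specific heuristic.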

Take Assessment Quiz:

  • Once students completely understand the theory and technicalities of the A-Star algorithm, they can take the assessment quiz to confirm the level of their understanding.
  • Assessment Quiz section covers some basic questions about the algorithm. After answering all the questions and submitting the test, this section provides the quiz score and the correct answers with a brief description as shown above.
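The quiz behavior described above (a score plus the correct answers for missed questions) could be modeled in a few lines. The question data and names below are hypothetical, not taken from the application:

```javascript
// Hypothetical quiz data: each question stores the index of its correct choice
const quiz = [
  { question: "Which heuristic keeps A-Star optimal?", correct: 1 },
  { question: "What does g(n) represent?", correct: 0 },
  { question: "What does f(n) equal?", correct: 2 }
];

// Score the user's answers and report which questions were missed
function gradeQuiz(quiz, answers) {
  const missed = [];
  let score = 0;
  quiz.forEach((q, i) => {
    if (answers[i] === q.correct) score++;
    else missed.push(i); // these indices drive the "correct answers" review screen
  });
  return { score, total: quiz.length, missed };
}

console.log(gradeQuiz(quiz, [1, 0, 1])); // 2 of 3 correct, question 3 missed
```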

Future Work:

Currently, the AlgoTutor application covers only one algorithm, but the same structure can be extended to help users understand other algorithms in Data Structures and AI. The application also does not store each user's learning progress separately; this drawback could be overcome by adding login functionality and maintaining user sessions. Despite these drawbacks, the application successfully achieves its main goal of helping users learn the A-Star algorithm in an easy way.

Implementing the AlgoTutor application:

The AlgoTutor application is developed using HTML, CSS, JavaScript, and jQuery Mobile.

GitHub Link: AlgoTutor: An Easy Way To Learn Complex Algorithms.

Application Link: Click here to open the AlgoTutor Application

YouTube Video Link: AlgoTutor

IMAGICA: Instant Image Processing for Videos

Working on Videos? Write a script!

Need to find Fourier Transform of videos? Write a script!

Need to find edges in videos? Write a script!

Need to convolve a video with Sobel operators? Write a script!

Need to apply a simple gamma correction on videos? Write a script!

Write a script! Write a script! Write a script!

And if it doesn’t work? Debug!!

Not anymore! Imagica to the rescue

Imagica is a tool for applying image processing algorithms to videos. It doesn't require you to write a single line of script: you just upload your video, select the algorithm, and it does the rest for you.

Who needs it?

Researchers in the field of image processing use some of the basic algorithms very frequently. In order to apply such algorithms to videos, they need to write elaborate scripts that extract all the frames from the video, apply the algorithm to each frame, and then combine the output frames into the output video. This is where Imagica comes into play: it does all of these tasks for you, without any scripts to write or debug.
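As one example of the per-frame work such scripts do, here is a sketch of gamma correction applied to one frame's pixel buffer. The buffer layout matches an RGBA `ImageData` array from a canvas, but the function itself is illustrative, not Imagica's code:

```javascript
// Apply gamma correction to one frame's RGBA pixel buffer.
// In the browser this buffer would come from a <canvas> ImageData object;
// here it is a plain typed array so the math is easy to follow.
function gammaCorrect(pixels, gamma) {
  const out = new Uint8ClampedArray(pixels.length);
  for (let i = 0; i < pixels.length; i += 4) {
    for (let c = 0; c < 3; c++) { // R, G, B channels
      out[i + c] = Math.round(255 * Math.pow(pixels[i + c] / 255, 1 / gamma));
    }
    out[i + 3] = pixels[i + 3]; // alpha is left untouched
  }
  return out;
}

// A single mid-gray pixel brightens under gamma > 1
console.log(gammaCorrect(new Uint8ClampedArray([128, 128, 128, 255]), 2.2));
```

Imagica's job is to run an operation like this over every extracted frame and stitch the results back into a video.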

The User Interface


Fig. 1 The Startup Page

Imagica has a minimalistic interface in which only the relevant options are visible and enabled. As seen in Fig. 1, all the buttons are disabled, and the only option the user has is to upload a video.


Fig. 2 Choose Operation and Parameters

Fig. 2 shows the state of the system after uploading the input video. Here the user can choose the operation. Note that as soon as the operation is chosen, the parameters corresponding to that particular operation become visible. A very intuitive interface allows the user to choose the values of each of the parameters.

The last two figures show the state of the system while processing and when it is done, respectively. Users can cancel a process if required. The status bar shows the progress as a percentage, which gives the user feedback about the state of the system as well as an estimate of how long the entire video will take to process. When processing is done, the user can download the video or play it in the browser.

Issues

Due to my limited exposure to JavaScript and HTML, I was unable to complete the actual processing of the videos. Hence, for now, the system shows a default video as the output, irrespective of the input video and the chosen operation and parameters.

Future Work

Obviously, the first thing that needs to be done is to fix the basic working of the system. Beyond that, I would like to add a drag-and-drop feature for input videos, which would let users upload a video without browsing through the file input dialog. I would also like to add a text input alongside each slider so that parameter values can be entered as text as well as through the sliders; this would remove the sliders' limits on precision and range.

Try it yourself!

You can give the system a try by clicking the following link.

https://www.cs.drexel.edu/~psw36/cs530/Project/index.html

Also, the source code is available here and a video showing the system in action can be found here.

 

FlySky: The Cockpit Training Room

Introduction

It is not feasible for flight schools to assign an aircraft to students in the learning phase; moreover, it is dangerous. Flight simulators can overwhelm a new student, so it is necessary for students to first become familiar with the cockpit controls. Flight schools provide PC-based simulators like iPilot and realistic flight simulators like the Vertical Motion Simulator (VMS), which give students a virtual flying experience, but before using those the students should be familiar with all the devices present in the cockpit. To that end, schools can use a PC-based training room in which the screen displays the cockpit and all of its controls in a realistic manner. A cockpit simulated on a computer is simple and easy to use, making it the perfect place for a student to start learning the controls, as opposed to starting with a flight simulator, which can overwhelm a new student who may damage the simulator in a panic.

The training room will contain two modes:

  • Learning mode
  • Test mode

In Learning mode, all the controls of the cockpit are visible and accessible to the user. When the user interacts with a particular control, a popup or speech balloon informs the user about all the functionalities performed by that control. For example, pulling or pushing the flap lever changes the angle of the plane's flaps for changing altitude.


Fig 1. Learning mode

In Test mode, students are presented with timed questions about the controls in the cockpit, and the available options (controls in the cockpit) are illuminated. The user has to select the control in the cockpit that answers the question. If the user answers all the questions correctly, the user passes the training room test. After answering all the questions correctly in 3 consecutive tests (to rule out randomly guessed correct answers), the user is eligible for the flight simulators.


Fig 2. Test mode
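The certification rule above (three consecutive perfect tests) amounts to a small piece of state; a sketch, with names of my own choosing rather than the application's:

```javascript
// Track the "3 consecutive perfect tests" certification rule.
// The names here are illustrative, not taken from the project's source.
function makeCertificationTracker(required = 3) {
  let streak = 0;
  return {
    recordTest(correct, total) {
      streak = correct === total ? streak + 1 : 0; // any missed question resets the streak
      return streak >= required; // true once the student is eligible for the simulator
    }
  };
}

const tracker = makeCertificationTracker();
console.log(tracker.recordTest(10, 10)); // false: one perfect test
console.log(tracker.recordTest(9, 10));  // false: a miss resets the streak
console.log(tracker.recordTest(10, 10));
console.log(tracker.recordTest(10, 10));
console.log(tracker.recordTest(10, 10)); // true: three perfect tests in a row
```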

 

User Analysis

The users of this system are flight school students aspiring to be commercial or Air Force pilots, or pilots requalifying for their license. These individuals should have a strong grasp of physics and math. The system aims to make the user familiar with all the controls in the cockpit so they can be introduced to flight simulators without being overwhelmed. The time limit on questions in Test mode trains the user to think on their feet and still answer correctly. The system does not present any real-time simulation; it only familiarizes students with the controls. Users are expected to answer all the questions correctly in 3 consecutive tests; if they do, they are awarded the certificate and can proceed to flight simulation. If the user answers any question incorrectly, they have to start over.

 

System Description

The user can learn about the cockpit controls in an interactive manner and test their knowledge of them. After answering all questions correctly, the user is certified and can proceed to simulated flight training. The system informs the user about one or many functionalities of each cockpit control and tests the user on the training it provides. The user needs to interact with the system in Learning mode to learn the different functionalities of the controls, and then test that knowledge. To be certified, the user must correctly answer all the questions in 3 consecutive tests.

 

Design

The application has been designed to provide maximum output from minimum user input. Since the Mode Selection page is the bridge connecting Learning mode and Test mode, a Quit/Back button is provided in both modes so that it is easy for the user to change modes while still respecting the formal processes in the application (the Student Information page). The design also ensures that visual feedback is provided to the user; for example, if the user hovers over one of the options in a question, the corresponding cockpit control is highlighted.

The system has been created using JavaScript, jQuery, HTML5, CSS, and Bootstrap. jQuery helped with the overall look of the application and with making the UI more intuitive and interactive. jQuery Mobile was also used to make the UI appealing to mobile users; for example, themes were used to map colors to buttons, and icons such as arrows and delete symbols were placed on buttons. Bootstrap was used to place the images and buttons in a grid and keep the layout responsive to changes in screen size. Modals were created with Bootstrap to present information about the controls, as their in-and-out transitions are more pleasing than an alert box. Plain JavaScript alert boxes are used to prompt the user when a condition is violated.

 

Future Work

The application could be connected to a proper database so that test questions are not repeated and are dynamic in nature. This would also make it possible to keep proper records of which students have passed the test and which have not, and to issue an officially signed, digitally generated certificate containing all the necessary information about the user.

It would also allow the system to add an additional school field, based on which the system could identify the Student ID.

The Back or Home button could also be attached to the view (like the search bar on YouTube), so that it is accessible to the user at any time.

The test page behind the modal could be blurred, so that the first question of the next test is not visible to the user.

A special background image could be designed that does not distract from the controls of the application.

 

Closing Thoughts

The application can be found at my Drexel webpage and is best viewed on laptops and desktops (Chrome, Firefox). If you are interested in having a look at the source code, please click here. If you want to see a video of the application in action, please click here to access the YouTube link.

 

CS 530 Final Project

Motivation:
I worked for a vending machine company as a co-op student in the summer of 2017. My group does central bank policy studies, and one of our important tasks is to find the security features on banknotes, at different levels, from all over the world. During this work I found it very inconvenient to look for information across the different central bank websites. It would be better for people who need that information if there were one website that could search all the banknote security feature information. So I decided to make a security features search engine as my project.

Audience:
The users of the system can be people from financial organizations all over the world, or anyone who wants to check the security features on a banknote. They can easily find the security features they are looking for with this search engine. Users don't need any specific knowledge beyond how to use a search engine, and even experts in this area may find useful information here. However, users are asked to sign in with their information in order to prevent the website from being used for illegal purposes.

Future:
There’s something I plan to do but hasn’t finish, those things can be added on this application in the future:
1.Using better algorithm in the searching function to sort the data in a more reasonable way.
2.Building the filter function to filter the data in countries, level, read way.
3.Building the database to sort the username and divide them into different level.
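A first cut at the search-and-sort function in item 1 might rank entries by how many query words they match. The data and field names here are made up for illustration:

```javascript
// A minimal keyword-ranking sketch for the security-feature search
// (the entries and field names are hypothetical, not the site's real data).
const features = [
  { country: "USA", name: "color-shifting ink", level: "public" },
  { country: "USA", name: "security thread", level: "public" },
  { country: "EUR", name: "watermark portrait", level: "public" }
];

// Rank entries by how many query words appear in their text fields
function search(query, entries) {
  const words = query.toLowerCase().split(/\s+/);
  return entries
    .map(e => {
      const text = `${e.country} ${e.name} ${e.level}`.toLowerCase();
      return { entry: e, score: words.filter(w => text.includes(w)).length };
    })
    .filter(r => r.score > 0)          // drop entries with no matching words
    .sort((a, b) => b.score - a.score) // best match first
    .map(r => r.entry);
}

console.log(search("usa thread", features).map(e => e.name));
```

The planned filters (country, level, read method) would simply narrow `entries` before ranking.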

Peaceful Ride: an efficient UI for passengers of SEPTA’s Quiet Ride cars

Hi!

Thanks for clicking on my blog post.  In this post, I will be presenting the passenger user interface for Peaceful Ride, a mobile application that facilitates confidential and efficient communication between passengers on SEPTA’s regional rail Quiet Ride cars and their conductors.

What problem is Peaceful Ride trying to solve? 

Philadelphia’s public transit system includes a set of regional rail lines which connect the center of Philadelphia to suburbs as far as 20 miles outside the city.  The regional rail lines’ ridership averages a bit over 100,000 passengers per weekday, according to SEPTA’s quarterly reports.  Each car of the train seats 110 to 125 people, with additional passengers able to stand in the aisles, according to Wikipedia.  Thus, on a typical weekday commute, over 100 people share the same physical space for a 20 to 50-minute ride. In 2009, SEPTA began the Quiet Ride program, which designates the first car of all SEPTA regional rail commuter trains running between 4 am and 7 pm as Quiet Ride cars.  In these cars, passengers are asked to remain quiet: no phone calls, no conversations with neighbors, no device audio unless mediated by headphones.  The clear majority of Quiet Ride commuters obey these rules.  However, given the open nature of a commuter train, a small non-compliance rate is enough to cause all passengers to suffer.


A set of SEPTA’s new Silverliner5 regional rail cars

I attend school at Drexel University, in downtown Philadelphia. However, I live about 20 miles northwest of the city, so I ride SEPTA's regional rail train to and from the city (about an hour each way) 5 days a week. One of the primary benefits I derive from riding SEPTA is that I can study and work on school projects during my commute. Thus, I was overjoyed to learn about SEPTA's Quiet Ride car program, and I began to ride in it regularly. (In fact, I am sitting in a Quiet Ride car as I type this blog post.)

Most of the time (probably 80%), the Quiet Ride car is a great environment in which to get two additional hours of work done.  However, about twice a week, I find myself sitting next to someone who just can’t help themselves and has to talk on their phone or listen to music while riding in the Quiet Ride car. Now, I’m normally not one to judge someone for talking on the phone or listening to music.  I do those things too.  However, when you have explicitly chosen to sit in a train car called the Quiet Ride car, and when everyone else around you has also explicitly chosen to sit in a train car called the Quiet Ride car, and when you have explicitly chosen not to sit in one of the two to five other cars of the same train, which are not Quiet Ride cars, you should be quiet.  (Clearly this is a little personal for me…but, as LeVar Burton used to say on Reading Rainbow, you don’t have to take my word for it.  You can also read numerous articles in respected Philadelphia news outlets. For example, here, here, or here.)


Look at how many people can be annoyed by one guy on a cell phone!

Are there any current solutions to this problem?

SEPTA’s current solution to this problem is to direct conductors to quiet noisy passengers.  However, SEPTA conductors are responsible for multiple cars, and so are (understandably) outside of the Quiet Ride car for most of the commute. The problem here is that the people who are aware of the problem are not connected to the people who can solve the problem. Or, in more tech-geeky language, the problem is that the sensor and the actuator are not connected.  As of the summer of 2017, I am not aware of an existing technological solution to this problem.


One possible solution to the problem

How does Peaceful Ride solve the problem?

Peaceful Ride is a smart-phone-based application that attempts to connect the sensors to the actuators, or, in more human terms, connect the passengers who are perceiving the inappropriate noise in the Quiet Ride car with the conductors who have the authority to resolve it. Peaceful Ride allows passengers to send an alert to conductors who would then return to the Quiet Ride car to quiet the noisy passenger.

How does Peaceful Ride work?  (Application Flow)


The home/welcome screen

When the passenger first opens Peaceful Ride, they are greeted by a blue home screen.  This screen invites them to click “Send Alert” or, especially if they are new to Peaceful Ride, learn more about the application by tapping “Info.”

The info page explains the application to the passenger, illuminating their choices at each step, explaining that they can unmake any decision along the way, and assuring them that their confidentiality as an ‘alerter’ will be maintained.  Tapping “Back” returns them to the home screen.  The info page can be accessed from any part of the application.

Tapping “Send Alert” sends an alert to the conductor and takes the user to another screen.  This screen contains the button, “Cancel Alert,” which cancels the alert the user just sent.  Later, once the conductor has responded, an “Alert Status” button appears on the screen.  Tapping the button shows the passenger the conductor’s estimate of how many minutes it will take them to resolve the alert.  When the conductor has resolved the alert, the “Cancel Alert” button


Receiving a response from the conductor

disappears and a new “Status Update” button appears, with another message from the conductor saying that the alert has been resolved and inviting further alerts if they are needed.  After tapping this “Status Update” button, the user returns to the home screen, where the usual two buttons (“Send Alert” and “Info”) are joined by a new button (“Send Thanks”).  Tapping “Send Thanks” sends a thank-you to the conductor who quieted the alert.  This is not a text message; it is more akin to a ‘like’ than a ‘comment’ on social media.

Building a fast UI

A great deal of thought went into the design of the user interface, particularly with regard to the speed and ease of use and the emotional state it will encourage in the user.  First, let’s talk about speed.  I assume that any passenger who attempts to use Peaceful Ride does not want their fellow passengers to know that they are using it, because then their fellow passengers would know that they disapprove of the noise and have reported it.  If the passenger didn’t care who knew of their disapproval, they would likely have simply spoken to the noise maker themselves, and so would not have opened Peaceful Ride in the first place.  In light of this desire for privacy, the first goal of the passenger user interface is fast execution.  Peaceful Ride is different from many mobile applications in that its success is measured not by how much time a user spends in the application, but by how little time a user spends in the application.  As such, the buttons are few and self-explanatory, the text is short and clear, and each option executes quickly. Notably for a communication application, at no point in the passenger user interface does the passenger have the option to send a text message.  This is intentional.

Building a peaceful UI

I further assume (based upon my own experience on SEPTA and the experience of others) that a passenger who attempts to use Peaceful Ride will open the application in an emotionally upset state.  They feel angry that someone is making noise.  They feel anxious wondering whether a confrontation will occur.  They feel guilty that they haven’t spoken to the noise violator personally, and resentful that the noise maker has put them in this position.  Thus, the second goal of Peaceful Ride is to increase the user’s peacefulness.  The color palette is blue, and the text is inviting and reassuring.  The lack of options and the self-explanatory nature of the buttons also serve the goal of instilling peace in the user by directing them gently to their goals.  Having the option to thank the conductor allows the passenger to do something nice for someone else, which may counter their perception that they have done something not nice to someone else by reporting the noise maker.

Next Steps

At this point, the application is not fully functional; all that currently exists is the passenger user interface in demonstration mode.  Two significant blocks of code remain to be written to create the fully-functional application: the conductor user interface and the back end.

The conductor user interface would have the same need for efficiency and speed as the passenger interface, though for a different reason.  Whereas the passenger desires speed so that they can remain anonymous while posting an alert, the conductor desires speed so that the additional responsibility of Peaceful Ride adds as little time as possible to their already busy schedule.  Conductors have a long list of responsibilities, and, in order for this technology to be adopted and serve the best interests of customers and conductors, it must take as little conductor time as possible to use.  We propose a simple user interface with only two buttons: “Send Response” and “Alert Resolved.”  When no alert has been sent by passengers, the conductor interface would be blank or contain a simple message: “No Quiet Ride alerts.”  When an alert has been sent, the conductor’s interface would change color and the phone would vibrate, allowing for quick observation; the “Send Response” button would then appear.  Tapping that button would give the conductor a list of numbers (1, 2, 3, 5, 8, 10).  The conductor would choose the number of minutes it will take them to arrive in the Quiet Ride car and quiet the noise.  This number would be automatically inserted into a pre-composed message and sent to all customers who have sent, or will send, an alert before the conductor arrives in the Quiet Ride car.  At this point, the “Send Response” button would disappear and be replaced by the “Alert Resolved” button.  Once the conductor has quieted the Quiet Ride car, they would tap “Alert Resolved,” and the application would automatically send a pre-written message to all passengers who had sent the alert, telling them that the alert was resolved and encouraging them to reach out again if noise develops a second time.  After the conductor tapped “Alert Resolved,” their interface would return to the initial setting.
If passengers cancelled an alert before the conductor responded to it, the conductor’s interface would also return to the initial setting and display the text, “Alert Cancelled.”

In addition to the two interfaces, the application would need to perform three non-trivial, behind-the-scenes processes.  First, when a passenger sends an alert, the application would need to discern which train the passenger was on (and whether the passenger was on any train at all).  This is needed both to ensure that alerts get routed to the correct conductor-team and to decrease the impact of pranksters who would use the application to spam conductors from another location.  Thus, the application would need access to the passenger’s geolocation and the conductors’ geolocations.  Armed with that information, the application would send the alert to conductors whose locations put them within an appropriate distance of the passengers (perhaps 200 meters).  If a user’s phone exited the 200-meter radius at any time during the alert process, their alert would be dropped.
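The 200-meter check described above is a great-circle distance computation; a sketch (the coordinates in the example are arbitrary points in Center City Philadelphia, and the radius is the hypothetical one proposed above):

```javascript
// Great-circle (haversine) distance in meters between two (lat, lon) points,
// used to decide whether a passenger is within ~200 m of a conductor.
function distanceMeters(lat1, lon1, lat2, lon2) {
  const R = 6371000; // mean Earth radius in meters
  const toRad = d => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Two points roughly 50 m apart: close enough to be "on the same train"
const onSameTrain = distanceMeters(39.9526, -75.1652, 39.9530, -75.1650) < 200;
console.log(onSameTrain);
```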

Second, the application would need to aggregate the list of alert senders.  The list of alert senders would serve two purposes.  First, the application would wait to send an alert until the number of people in the list had crossed some threshold.  (Perhaps three people need to send the alert before it is forwarded to the conductor.)  This would further protect against bad actors and would protect against hyper-alerters: people who send an alert too frequently.  Second, this list would allow the conductor’s message to be sent to every person who sent an alert automatically, without additional input from the conductor.
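The aggregation-and-threshold behavior could look like the following sketch, where the threshold of three and all the names are hypothetical:

```javascript
// Aggregate alerts and only notify the conductor once a threshold of
// distinct passengers (here, a hypothetical 3) have reported the noise.
function makeAlertAggregator(threshold, notifyConductor) {
  const senders = new Set();
  return {
    addAlert(passengerId) {
      senders.add(passengerId); // duplicates from one hyper-alerter count once
      if (senders.size === threshold) notifyConductor([...senders]);
    },
    cancel(passengerId) { senders.delete(passengerId); }
  };
}

let notified = null;
const agg = makeAlertAggregator(3, list => { notified = list; });
agg.addAlert("p1");
agg.addAlert("p1"); // repeat from the same passenger is ignored
agg.addAlert("p2");
console.log(notified); // still null: threshold not yet reached
agg.addAlert("p3");
console.log(notified); // the conductor sees all three senders
```

Keeping the sender set around also gives the conductor's response a ready-made recipient list.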

Finally, the information would need to be communicated between passengers and conductors in a confidential manner.  This problem has been solved, as demonstrated by many other texting applications, and would only need to be applied to this context. My hope is to continue this research in these ways in the future.

Closing Thoughts

If you are interested in seeing a demonstration of the application, you can find one at my Drexel webpage.  (Recall that the application is primarily a mobile application; to best experience it, you should navigate to that site on your phone, or look at it with Chrome’s developer tools.)  Also, if you are interested in seeing the source code, you can find it on my public GitHub repository here.  If you are interested in talking with me about the research, feel free to reach out to me via email at rr625 at drexel dot edu.  I’d love to talk.

Capturing Meaningful Agents’ Behavior in Blocks World

When developing new spatial planning algorithms for robotic systems that will collaborate with people to complete a task (e.g. moving objects around a room), it is important to know how the robot's spatial movement-based decisions converge or diverge from the human team member's decisions. For this online simulation-based study, we are looking at the human decision-making processes involved in moving virtual blocks. Participants complete a simulation by moving a human avatar around a 14×14 grid to push blocks from set start locations to set end locations. The entire study should take approximately 20 minutes to complete. Path-mapping analyses will be conducted to determine whether there is one "human" way of completing the task or several.
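A push move in such a grid world comes down to a few bounds checks; this sketch uses an illustrative encoding of the board, not the applet's actual code:

```javascript
// Validate and apply one push move on a grid like the study's 14x14 board.
// 0 = empty, 1 = block, 2 = avatar (the codes are illustrative).
function tryPush(grid, [r, c], [dr, dc]) {
  const tr = r + dr, tc = c + dc;   // cell the avatar steps into
  const br = tr + dr, bc = tc + dc; // cell a pushed block would land in
  const inBounds = (y, x) => y >= 0 && x >= 0 && y < grid.length && x < grid[0].length;
  if (!inBounds(tr, tc)) return false;
  if (grid[tr][tc] === 1) {
    // Pushing a block: the cell behind it must be on the board and free
    if (!inBounds(br, bc) || grid[br][bc] !== 0) return false;
    grid[br][bc] = 1;
  }
  grid[r][c] = 0;
  grid[tr][tc] = 2;
  return true;
}

const grid = [
  [2, 1, 0],
  [0, 0, 0]
];
console.log(tryPush(grid, [0, 0], [0, 1])); // avatar pushes the block one cell right
console.log(grid[0]);
```

Logging the sequence of such moves per participant is what the path-mapping analysis would compare.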

You can view the video here.

https://youtu.be/dQajMIFJu-8

You can run the applet online by following these directions. I have used a code-signing certificate to sign the jar file, so it is able to run with Java security set to High or Very High as long as the site is added to the Java Control Panel.

Open Control Panel > Java > Java Control Panel > Security

Add this site to the Exception Site List:

https://www.cs.drexel.edu/~rwb64/

Then open this page in Internet Explorer:

https://www.cs.drexel.edu/~rwb64/cs530/project/blockGamePage.html

By using an applet, the intent is to use Mechanical Turk to gather a larger data set in a follow-on study. In the present study, we compare multiple human routes. Future work will compare human and algorithm-generated routes post hoc; we will then assess differences in human decision-making across varying levels of environmental complexity and task difficulty, and compare the findings to a newly developed robotic algorithm at MIT for completing the same task. Further studies will require an overhaul of the code base to include an application programming interface (API) that allows the robotic platform to perform actions within the simulated environment. The evaluation will include a human and an intelligent agent working together to complete missions. Again, the hope is to learn how to work together as a manned-unmanned team.