Since AI has become nearly unavoidable in modern learning, it is important for teachers and students to be aware of the advantages and disadvantages of using these tools. One major thing I have noticed, from my own experience and from others, is that AI has become almost as essential as popular search engines once were. This shift comes with trade-offs: with a search engine you can seek out the most credible sources yourself, but tracing the sources that AI pulls from is difficult because it often draws on multiple sources that may not be verifiable. Whether AI is being used as a study aid, tutoring system, coding assistant, or debugger, it is important to understand its limitations. The AI system I have become most familiar with is ChatGPT, through constant testing and pushing of its limits. I used ChatGPT extensively while taking this course, in ways both small and big. Sometimes I would take the time to prompt it and then not use any of the output because of these limitations. Because of all this, I believe I have been able to identify the advantages and drawbacks of using AI in ICS 314.
For the experience WODs, I knew that AI usage was not necessary. While they were timed assignments, we were able to attempt the WODs as many times as needed to reach a time we were satisfied with. And if I ever got to a point where I couldn’t move forward, the video lesson on how to complete the WOD was readily available to give me the assistance I needed. For these assignments, I knew that using AI might send me down a route toward a solution that didn’t match the requirements, so I avoided it.
For the in-class practice WODs, I avoided using AI on nearly all of them because there was less of a time constraint. Since we were allotted the rest of the day to submit them, I was able to go home and keep working until I reached the correct solution. AI usage was allowed, but I did my best to use it as little as possible, and only for debugging, because I knew it was important to reach the solution myself; that practice would be the advantage going into the in-class WODs that were timed.
The in-class WODs had a strict time constraint, which often didn’t feel like enough for someone without the strongest coding foundation to finish without AI assistance. Whether it was building the skeleton, writing test cases, debugging, creating functions, or pasting the whole WOD requirement into ChatGPT, I found myself trying it all during the final 20 minutes of the WOD because I wanted to avoid a DNF (Did Not Finish) at all costs. Most times the results gave me what I needed or what was missing to complete the assignment, but sometimes I would spend all 20 minutes prompting AI and still end up with a DNF. I believe that if the time constraint weren’t so tight for people who opted out of using AI, it wouldn’t feel so crucial to use it when it isn’t necessary.
For the essay assignments, I never used AI. I have never had a good experience using AI for writing assignments because I don’t think it can reflect my experiences without sounding non-human. I also believe it would take too much time to prompt AI to convey the message I want in a reflection essay.
For the final project, I admit that I used AI such as ChatGPT and Copilot the most in ICS 314. There were many times when I prompted AI to create skeletons for UI coding issues, write or improve code snippets, debug, explain code, teach a concept, and so on. Since this was a group project, I was lucky enough to be able to reach out to group members first with the issues I was facing before turning to generative AI for assistance, but I still often resorted to it for what I considered “busy work.” This came with many drawbacks, though, since AI systems like ChatGPT don’t have access to the full GitHub repository. I would spend too much time crafting prompts and reviewing code because it would often hallucinate files that didn’t exist in the repository, which made much of the generated code unusable. This was most apparent when I was helping another group member with her coding issues, because ChatGPT’s output produced many ESLint errors. I watched code get thrown back into AI repeatedly to fix ESLint errors, and it could never remove them all. As a result, her code would fail to deploy to Vercel, and she needed help from a group member to fix it. Situations like these were very inconvenient and made it clear that AI isn’t completely dependable and can’t replace a human reviewing the code and fixing technical errors. On the other hand, AI was most helpful for debugging and for simplifying code or making it more modular, which are good techniques a software engineer should know. I used it the most in Milestone 1, where most of the brainstorming originated: with the proper prompts it generated ideas, and it was a great tool for laying out how we might want the application to function, along with suggestions for UI layouts and realistic goals.
I definitely used AI such as ChatGPT a lot during the final project to learn new concepts and walk me through tutorials, especially for issues related to backend integration. Since the backend functionality is such an important part of the system, I was afraid to attempt an issue without doing enough research to complete it correctly and with confidence. For example, I wanted to add a column to the events table, and ChatGPT provided a how-to tutorial with the commands to run in pgAdmin to modify the table, which proved just as useful as a search engine.
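To illustrate the kind of change involved, the tutorial boiled down to a single ALTER TABLE statement that pgAdmin’s Query Tool accepts. The sketch below shows it run from a small script instead; the location column name and connection setup are placeholders of mine, not our actual schema:

```typescript
// Hypothetical sketch: add a column to the existing events table.
// The column name and DATABASE_URL are assumptions; the same SQL can be
// pasted directly into pgAdmin's Query Tool.
import { Client } from 'pg';

async function addLocationColumn(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    // IF NOT EXISTS keeps the script safe to re-run.
    await client.query('ALTER TABLE events ADD COLUMN IF NOT EXISTS location TEXT;');
  } finally {
    await client.end();
  }
}

addLocationColumn().catch(console.error);
```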
I unfortunately did not answer many questions in class or in Discord. I didn’t want to answer unless I knew the answer without the help of AI such as ChatGPT, because an AI-assisted answer didn’t feel as genuine; I figured that if someone was asking a question in Discord about an issue, they must have already asked AI and hadn’t reached a useful conclusion.
As mentioned above, I only answered smart-questions in the Discord chat if I was able to find an answer to the problem on my own. Since AI was readily available and allowed in this class, I assumed that asking a smart-question in the Discord was someone’s last resort because they couldn’t find a solution on their own.
I don’t think I used AI in that manner much, except in the timed in-class WODs when code snippet examples were provided but not explained. For the experience WODs, any code snippets I hadn’t seen before were usually explained, or their purpose became clear after running and testing what the code produced.
There were definitely a lot of instances where I used ChatGPT to explain code so I could quickly understand its functionality without tracing through it myself, especially in the final project, where files sometimes exceeded 200 lines. More specifically, I often prompted AI to explain routing configurations, teammates’ code, the correct resolution for merge conflicts, and terminal commands. This was most useful in a group project, where efficiency matters and being able to quickly recognize and understand other styles of code is essential.
I did find myself asking AI to generate code for me, especially when I didn’t know where to begin or how to write a function following best practices. Using AI this way was the most beneficial because I could also prompt it to explain the logic of certain lines or the functionality as a whole.
While I didn’t work on the README for the final project, I did paste code written by teammates into AI like ChatGPT and ask it to explain their logic and methods. This helped me understand the structure of the portions they had worked on so that I could build my own features and stay aligned with the group’s vision. AI also helped me document code by explaining why certain code blocks behaved the way they did. If the code didn’t work the way I envisioned, it often walked me through the logic of each function so I better understood what needed to change.
I didn’t use ChatGPT to fix ESLint errors because most of the time it was unsuccessful, largely due to its use of double quotes when only single quotes were allowed. I did use AI for quality assurance, especially when I could not understand the problems in the code even after tracing through each line. It was very helpful at identifying exactly what might be causing issues and at providing updated code that was much cleaner and better organized.
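For context, the kind of rule the generated code kept tripping over looks roughly like the sketch below; this is a minimal flat-config example I am using for illustration, not the course’s actual configuration:

```typescript
// Minimal ESLint flat-config sketch (illustrative, not the course's real config).
// With this rule, every double-quoted string that AI tends to generate is
// reported as an error until it is rewritten with single quotes.
export default [
  {
    rules: {
      quotes: ['error', 'single'],
    },
  },
];
```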
AI usage has impacted my learning and understanding in both good ways and bad. The in-class WODs are where AI usage negatively impacted my learning, because I sometimes pasted the entire WOD requirement into ChatGPT in order to finish within the time constraints. In those instances I learned the least: I didn’t generate any of the logic myself, and I used functions I had never seen before without ever having time to ask AI how the logic worked. By contrast, when I no longer had short time constraints, as in the final project, I used AI as a tool for my learning, prompting it many times to explain code and functionality whenever it generated large snippets. This had a positive impact because I used AI as a supplement rather than relying on it heavily to complete each of my issues; I recognized early on that it can hallucinate files and functions and create more debugging work for me and my teammates.
Outside of ICS 314, I admit that I use AI a lot more than I would like to. As I mentioned previously, I enjoy using it for what I consider “busy work” or for explaining concepts in other classes. The busy work I use AI like ChatGPT and Gemini for includes creating flashcards, summarizing lecture notes, and so on. It has assisted my learning in many ways outside of ICS 314 and has saved me a lot of time, letting me focus on things like studying. I have also found Gemini to be very helpful at explaining complex topics in a much more digestible way, especially textbook material on complex theories and analysis.
I have encountered many challenges and limitations while using AI in this course. For example, without enough context in the prompt, AI doesn’t generate useful information, while too many prompts often lead to more errors. As a result, a lot of time gets wasted figuring out how to properly prompt generative AI, or on over-prompting it. These are the scenarios where I regret using AI, because it hurts efficiency when assignments have strict deadlines. AI also doesn’t know the full extent of the content taught in the course and sometimes uses complicated methods I’m not familiar with, which restricts my learning. I think there will always be challenges when depending too heavily on AI to understand complex topics and do assignments, which is why it’s important in software engineering education to treat AI as a learning tool and never something to depend on.
In terms of engagement, knowledge retention, and practical skill development, comparing traditional teaching methods with AI-enhanced approaches shows that each has its own strengths and limitations. Traditional methods offer limited personalization: they are usually led by one instructor through lecture-based classes and group projects, and while that may increase interaction, it doesn’t work for every student, since not everyone can progress at the same pace. Still, lecture, discussion, and repetition reinforce memory, which helps build the foundational skills every software engineer must know and never forget. Building those foundations through hands-on coding assignments and tests is very important because it provides more real-life experience. Unfortunately, many of these factors depend solely on the teacher and can lead to subjective or biased feedback. On the other hand, AI-enhanced learning can create a more personalized experience catered to each student’s needs and pace. It can provide immediate feedback and corrected practice, with good repetition on fundamental knowledge. Because it can suggest improvements, feedback, debugging, and assistance almost instantaneously, many people may assume it is better, but AI has proven to be unreliable at times and doesn’t always demonstrate best practices, so that is worth considering.
I still believe it is better to integrate AI into software engineering education because it can be a very useful tool when used correctly, and it won’t harm our learning as students as long as it is used responsibly. I do believe the in-class WODs should be restructured, though, because that was the one assignment where I felt the most pressure to use AI due to the short time constraints.
Overall, I had a great experience using AI, testing its usage, and discovering its many limitations and shortcomings in software engineering education. Although it has evolved very quickly and has proven to be a very powerful tool, it’s important to remember that AI should always be viewed as a supplement to learning whenever it’s used.