Generative AI refers to a subset of artificial intelligence models that are designed to generate new, previously unseen data that mirrors some set of training data. Rather than just making decisions or classifications, these models actively produce content.
Common applications of Generative AI include chatbots, virtual assistants, and language models (like ChatGPT) that generate human-like text based on the input they receive. Other uses extend to writing poetry, composing stories, and even creating art.
Given its ability to generate language, another popular use case has been generating programming code. As a type of virtual assistant, a GenAI application can be asked to write a quick function or generate an entire program. This can save programmers extensive time they would otherwise spend writing the code themselves. The programmer still needs to review the generated results and often make corrections or adjustments to get them just right, but the time savings can be significant.
Another less-tapped use case for GenAI-written code is learning new environments. Consider a systems administrator who is tasked with moving from an on-premises environment, where they have historically written shell scripts to automate tasks, to the public cloud and its array of new services. Where does the administrator get started? How do they learn this vast new ecosystem? This is where GenAI can help.
As an example, most public cloud providers have support for Terraform, an open-source “Infrastructure as Code” tool created by HashiCorp. Trying to learn about the cloud can be overwhelming, and trying to learn Terraform at the same time only exacerbates the cognitive load. On top of that, trying to understand how each cloud provider differs slightly compounds the problem. A tool such as Terraform provides a common interface across the cloud providers, but how do you get started with Terraform itself? With a GenAI solution such as ChatGPT, you could provide a prompt such as
“create me a terraform template for AWS that creates a Linux virtual machine within an autoscaling group. It should have a load balancer allowing only port 443 in along with corresponding network security groups….”
Leveraging ChatGPT this way can save significant time and help train administrators on the new capabilities within the cloud.
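For those who prefer to script such requests rather than use the chat interface, here is a minimal sketch of sending the same kind of prompt through OpenAI’s Python SDK. The model name is illustrative, and the example assumes the openai package is installed and an API key is configured; the web UI works just as well.

```python
# Minimal sketch: sending the Terraform prompt to a GenAI model via the
# OpenAI Python SDK. Assumes the `openai` package is installed and the
# OPENAI_API_KEY environment variable is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Create me a Terraform template for AWS that creates a Linux virtual "
    "machine within an autoscaling group. It should have a load balancer "
    "allowing only port 443 in, along with corresponding network security groups."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

# The generated Terraform template comes back as plain text to review,
# adjust, and test before applying it to any real environment.
print(response.choices[0].message.content)
```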
The Vast Landscape of Modern Programming
The last decade, and even the last two years, have witnessed an unprecedented surge in the development and adoption of libraries, frameworks, and tools across various domains of software and data science. Here’s just a glimpse at the things available to programmers supporting modern initiatives:
- Programming languages: Once it was Java; now some could argue Python has become the most popular programming language in the world due to its simplicity, versatility, and extensive library ecosystem. Other languages that have seen significant growth in popularity include JavaScript, TypeScript, and Go.
- Web development: Frameworks like React, Angular, and Vue.js have made it easier than ever to build modern and interactive web applications. These frameworks are also highly scalable, making them ideal for large-scale enterprise applications.
- Data science: Python is also the most popular programming language in the data science community. Libraries like NumPy, pandas, and scikit-learn provide powerful tools for data analysis, machine learning, and deep learning.
- Cloud computing: The rise of cloud computing platforms like AWS, Azure, and GCP has made it easier for developers to deploy and scale their applications. Cloud-based services like managed databases, serverless computing, and machine learning platforms have also simplified the development process.
- Artificial intelligence: AI is one of the hottest trends in technology today, and there has been a surge in the development of AI libraries and frameworks. Some of the most popular AI frameworks include TensorFlow, PyTorch, and scikit-learn.
In addition to the above, here are some other specific examples of libraries, frameworks, and tools that have seen significant growth in adoption in the last several years:
- Docker: Docker is a containerization platform that allows developers to package their applications into isolated containers that can be run on any platform.
- Kubernetes: Kubernetes is an orchestration platform that automates the deployment, scaling, and management of containerized applications.
- Git: Git is a distributed version control system that is used by millions of developers around the world to track changes to their code.
- GitHub: GitHub is a cloud-based Git repository hosting service that provides a variety of features for code collaboration and management.
- Jenkins: Jenkins is an open-source continuous integration and continuous delivery (CI/CD) platform that automates the software development process.
- Terraform: Terraform is an open-source infrastructure as code (IaC) tool that allows developers to define and manage their infrastructure in code.
- Prometheus: Prometheus is an open-source monitoring and alerting system that is used to track the performance and health of applications and infrastructure.
- Grafana: Grafana is an open-source analytics and visualization platform that is used to visualize data from Prometheus and other monitoring systems.
The surge in the development and adoption of libraries, frameworks, and tools has made it easier for developers to build and deploy high-quality software applications. It has also enabled developers to focus on the core logic of their applications without having to worry about the underlying infrastructure. This explosion underscores not only the rapid pace of technological innovation but also the challenges and opportunities developers face. Keeping up can be daunting, but each new tool or library offers ways to solve unique problems, improve efficiency, and push the boundaries of what’s possible.
Cognitive Overload for Developers
I have often struggled with “imposter syndrome” while trying to stay current and keep up with people half my age. My years of experience start to feel like they barely scratch the surface of how fast technology is maturing.
The decision between specializing deeply in a few technologies or maintaining a broad understanding of many is an ongoing dilemma in the tech industry. Specialization can lead to expertise in a niche area, making an individual highly valuable, but it comes with the risk of obsolescence if the chosen technology falls out of favor.
Moreover, employers often prioritize experience in the latest technologies, potentially pressuring developers to learn them solely for employability rather than personal interest. On the other hand, a broad skill set offers adaptability to evolving trends and reduces the risk of obsolescence, but it may lack the depth of expertise found in specialized roles.
The rapid evolution of the tech landscape also places an implicit expectation on developers to invest personal time in continuous learning, which, while enhancing skills, can impact work-life balance and contribute to burnout. The abundance of tutorials and resources further complicates the challenge of filtering and selecting the most relevant information.
New tools might not always play well with existing systems, leading to integration headaches. Nor are all resources free: while many tutorials and guides cost nothing, specialized courses, certifications, and premium tools come with a price, so keeping up can also mean financial investment.
Generative AI as a Learning Companion
This brings me to the heart of this paper: how programmers can use GenAI to reduce cognitive load while staying current and keeping up with their peers.
While working on my last project, coding a chatbot, I ran into the challenge of some rather hard and complex algorithms for identifying language intent. Understanding how words relate can be difficult. For example, determining that “royal” and “king” are related, or that “mug” and “cup” are related, seems easy enough to us as humans, but coding this to be fully dynamic and accurate is a much larger lift. As I tried various techniques, the results were inconsistent: when I got “royal to king” and “mug to cup” working, the solution was still failing on “stool to chair”.
Using GenAI solutions to do my research helped me identify many options, including fuzzy string matching, advanced natural language processing (NLP), tokenization, word embeddings, and even creating my own pre-trained models. I don’t think I would have been able to tackle this challenge as elegantly had it not been for GenAI and its ability to offer potential solutions.
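To make those options concrete, here is a minimal sketch (not the code from my project) contrasting fuzzy string matching with word-embedding similarity, assuming spaCy and its en_core_web_md model are installed:

```python
# Minimal sketch contrasting two of the suggested approaches.
# Assumes: pip install spacy && python -m spacy download en_core_web_md
from difflib import SequenceMatcher

import spacy

nlp = spacy.load("en_core_web_md")  # medium model ships with word vectors

pairs = [("royal", "king"), ("mug", "cup"), ("stool", "chair")]

for a, b in pairs:
    # Fuzzy string matching only compares characters, so "mug"/"cup"
    # and "stool"/"chair" look unrelated.
    fuzzy = SequenceMatcher(None, a, b).ratio()
    # Word embeddings compare meaning, so related words score higher
    # even when they share no characters.
    semantic = nlp(a).similarity(nlp(b))
    print(f"{a:>6} vs {b:<6}  fuzzy={fuzzy:.2f}  embedding={semantic:.2f}")
```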
The benefits didn’t stop at offering solutions; it was also able to provide sample code that I could easily test and put through its paces. The code often only partially worked, and any time I fixed one issue, it broke another. Part of the problem was that the GenAI solution didn’t fully remember what it had already solved. If something didn’t work, I’d tell the GenAI what the issue was. When it gave me a fix that worked, I often found a second issue, and as we iterated on the second issue we’d end up reintroducing the first, and round and round we went. The more this happened, though, the better I understood both the GenAI and the code it was producing, along with the subtle changes it was trying to make. Eventually, I was able to blend the various solutions myself to arrive at the right one.
I consider myself a relatively strong Python programmer, but as I iterated on solutions I noticed the GenAI code snippets used Python constructs I didn’t recognize. Sometimes they made perfect sense and I learned a new technique. Other times I had to go back to the GenAI and ask for further clarification, which it provided, allowing me to better understand what it was doing and how it achieved its outcomes. In the end, it improved my knowledge of the language, and I’m better for it.
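As a hypothetical illustration of the kind of construct that can surprise even an experienced Python programmer, a suggestion might lean on newer idioms such as the walrus operator inside a comprehension:

```python
# Hypothetical example of the kind of idiomatic Python a GenAI might suggest:
# the walrus operator (:=) combined with a dictionary comprehension.
scores = {"royal": 0.81, "mug": 0.74, "stool": 0.42}

# Keep only pairs above a threshold, capturing the looked-up value once.
strong_matches = {
    word: value
    for word in scores
    if (value := scores[word]) > 0.5
}
print(strong_matches)  # {'royal': 0.81, 'mug': 0.74}
```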
As I reflect on the problem and how I went about solving it, what ended up taking a day or two to resolve would have taken me weeks or more had I relied only on traditional methods such as Google searches and message boards. I’m not sure I would have ever gotten to a satisfactory solution, and I would likely have ended up with an inferior product. This opened my eyes to a world of endless possibilities and left me feeling like I can tackle anything thrown at me.
It’s important to note that a lot of the code snippets didn’t work as requested and required thorough testing, tweaking, and massaging to get right. I tried various solutions, including OpenAI’s ChatGPT 3.5 and 4.0 and Google’s Bard. In the end, I found ChatGPT 4 provided code snippets that tended to run more reliably, without coding errors, whereas ChatGPT 3.5 often had issues and I had to iterate more just to get the code to run. Bard provided good results but required much more detailed prompting to get complete answers; otherwise it tended to be much more terse.
There’s a lot of controversy over using public GenAI solutions in the enterprise and what the risk of data exfiltration might be. Some organizations worry about losing intellectual property if programmers cut and paste code into a GenAI prompt, or about leaking PII or other private data while resolving a problem. Many tools and solutions can be put in place to control the risk while still empowering programmers who need and want to leverage GenAI.
In the end, GenAI can improve developer skills, teach new solutions and technologies, increase the speed at which a developer produces results, help optimize code and make it more efficient, and more. Having strong governance and controls around using GenAI is important, but closing it off entirely would be a detriment, so find a way to do it safely and have happier programmers.