From the course: Generative AI for Business Leaders

Key pitfalls and limitations

- At this point, you might think, "Wow, these generative AI models are insanely powerful and I want to get started with implementing them right away." Well, not quite. Today's AI models still have many critical pitfalls and limitations to keep in mind before you start working with them. Avoiding these mistakes can increase the chances of success for your project. I would expect that many of these pitfalls and limitations will get better over time, but in the meantime, it's important to be actively aware of them. Now, let's go over the top 10 pitfalls and limitations. Number one is an oversimplified view of your objective. As we discussed, the most important thing for any project, especially for AI projects, is setting the right objective and plan. Your AI output will only be as good as you set it up to be, and an oversimplified view of what your objective should be can easily lead your algorithm, and your company, down the wrong path. Number two is high computational costs. Generating new content often requires significant computing capacity, power, and time. You might hear about this as GPU or computing power constraints. GPU stands for graphics processing unit, and GPUs are often used for parallel processing in AI applications, which can dramatically speed up computation time. However, if your AI system does not have sufficient GPU resources, this can limit the system's ability to handle large data sets or generate results quickly. Furthermore, as we learned before, you might want to fine-tune the model with a private instance of your own, and that increases cost and can become prohibitive unless the ROI is there for you. Number three is algorithm hallucination. Unlike humans, AI doesn't know how to say, "I don't know." In other words, it might make things up. This is a serious limitation of AI, and it can occur primarily when the algorithm overfits on a small or biased data set, which results in generating outputs that are not representative of the real world. 
It's key that you treat AI as a source of input, not a source of truth. If AI is directly interacting with users or customers, ensure that you have guardrails in place to avoid unintended consequences. Number four is staleness. Generative models are susceptible to stale knowledge. In other words, the agents or systems developed can be limited by the data they have been trained on. This has implications for long-term effectiveness, as they might not be able to keep pace with fast-changing contexts or environments. Number five is restrictiveness. Generative AI models do not do well when it comes to accessing basic information, such as telling the date or doing basic math calculations. Many algorithms are designed to work only with text or image data, and that can limit the scope and capability of those systems. Number six is interpretability. Many AI models tend to be black-box systems, meaning it can be difficult to understand why they generate a given output. This is problematic when trying to effectively diagnose errors or improve model performance. Number seven is token constraints. A token refers to a unit of data that the generative AI system is able to process at one time. Token constraints put a limit on the amount of text that the system can accommodate, whether as input to or output from the system. The amount of input is critical for success. The more nuanced and unique input you can bring into your prompt, the better and more accurate your results can be. Token constraints can limit the types of data that can be used for training, or the scope of problems that the system can address. Number eight is the ability to keep state. Keeping state, or memory, is essentially the ability to remember past inputs and build on them in order to generate outputs. Think of it as working memory, short term or long term. For some AI applications, this ability is crucial. 
For example, a natural language generation algorithm might need to remember the context of the conversation so it can generate a response that actually makes sense. Similarly, a generative art or music algorithm might need to remember previous brushstrokes or notes in order to create a cohesive work. Without this capability, AI can struggle to generate accurate or coherent outputs. Number nine is data quality and availability. We already discussed the importance of feeding your AI with large-scale, high-quality data. If your algorithm is learning from bad, limited, or biased data, the outputs it generates will be similarly biased or poor in quality. Number 10 is ethical concerns. AI can be used to create fake content or for malicious purposes, such as creating disinformation or deepfake videos. It can also create copyright violations by copying content when it's not able to create something new, which especially happens when it's trained on a very small sample size. These issues and more raise a range of ethical concerns that need to be considered. Data quality and ethical concerns are so important that I wanted to dedicate a separate video to them. So let's talk about that next.