I have been working with AI since the ‘90s, with case-based reasoning since 2010, and with generative AI for the past year. I have watched it go from a wonder in the eyes of the geekier public to an eyesore as people entered terrible prompts and then berated the answers.
I have heard of lawyers citing hallucinated case law to justify their position in court. And perhaps most egregious: I read there was Chicago-style pizza in Florida. When I asked ChatGPT, it gave me the name Nancy’s Pizza, with an address only 30 minutes from home. How could I not have known? Nancy’s Pizza, which I credit with inventing Chicago-style pizza, supposedly had a location nearby. As I calmed down and started planning a trip, I checked Google Maps. No Nancy’s Pizza in Florida. Oh no… how could I not have supported Chicago pizza, and now it was gone? Using old-fashioned Google, I learned there had never been a Nancy’s Pizza in Florida. Just a hallucination. That was probably six months ago, before GPT-4o. I just tried again, and ChatGPT recommended only two real pizza places in the area.
Now I describe AI’s timeline as “one AI month equals eight dog years.” These days I ask ChatGPT (or the other AI models I am leaning toward) to justify its answers and warn me of possible hallucinations.
I have been spending significant time trying to identify where generative AI goes wrong and how to remove hallucinations: first with prompt engineering, then with fine-tuning, with RAG (retrieval-augmented generation, using our own or third-party data), by lowering the temperature (reducing the creativity of the model), and more.
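Lowering the temperature deserves a word on mechanics. A language model picks its next token by sampling from a probability distribution over candidates; temperature rescales that distribution before sampling, so low values sharpen it toward the single most likely token and high values flatten it. The sketch below is not any vendor’s API, just the underlying math, with made-up example logits:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.Random(0)):
    """Sample an index from a softmax over temperature-scaled logits.

    Lower temperature sharpens the distribution (near-deterministic,
    "less creative"); higher temperature flattens it toward uniform.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                             # inverse-CDF sampling
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical logits for three candidate tokens.
logits = [2.0, 1.0, 0.1]
cold = [sample_with_temperature(logits, 0.05) for _ in range(50)]  # nearly always index 0
warm = [sample_with_temperature(logits, 10.0) for _ in range(50)]  # spread across indices
```

Hosted APIs expose this as a `temperature` parameter, with values near 0 pushing toward deterministic answers, which is why lowering it tends to reduce, though not eliminate, fabricated detail.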
I am fortunate to be working with AI from several directions: nonprofit organizations providing answers to sometimes hard questions, and SEERai, which describes the characteristics of engineering products so the SEER models can provide cost, schedule, and risk estimates.
I realize anything I write may be wrong in a few months. Just this weekend, for example, a new AI model based on Llama was introduced that allegedly self-corrects errors; people are already contesting that claim, which illustrates my dog-year analogy.
I will be blogging about this journey and the lessons learned.
Dan Galorath
Dan Galorath is a software developer, businessman, author, and founder and CEO of Galorath.