AI in Project Management

WHERE WE ARE AND HOW WE CAN MOVE FORWARD

Cornelius Fichtner, in his course AI for Project Managers, online at Project-Management-Prepcast.com, presents a detailed and comprehensive overview of how Project Managers can use today’s Artificial Intelligence systems to make their work easier and more efficient. A key challenge, however, is understanding what AI can do and being able to communicate that to various stakeholders, including project sponsors, team members, and people who are affected by the project and the system.

Fichtner provides that needed understanding of how AI works today and how to use ChatGPT and ChatGPT-enabled tools. The 15-hour course is like a 1-credit seminar on AI for an MBA or other graduate program from a lecturer who clearly knows and loves his subject. It is also worth 14.75 PDUs for PMP re-certification.

Fichtner divides his course into seven sections:

  1. “The AI PM Revolution,”
  2. “Fundamentals,”
  3. “Ethics, Integrity & Transparency,”
  4. “Data,”
  5. “Generative AI,”
  6. “The Future: AI-Driven PM,” and
  7. “AI in Action: Expert Insights.”

“The AI PM Revolution” discusses how AI will change Project Management as a profession, and how Project Managers will adapt.

Then, he presents the “Fundamentals,” including:

  • “AI Basics,”
  • “Machine Learning,”
  • “Natural Language Processing,”
  • “Large Language Models.”

This is logically followed by a series of videos on Ethics, including “Responsible AI,” a series on Data, focused on data quality and data literacy, and a series on Generative AI.

Then, the section “The Future: AI-Driven PM” includes:

  • “Becoming an AI-Powered Project Manager,” and
  • “Developing the AI-Powered Project Team,” parts 1 and 2.

The course concludes with “AI in Action: Expert Insights,” which includes:

  • “Beyond Theory – Real-Life AI Solutions in PM,”
  • “Data Literacy for Project Managers in the Age of AI,”
  • “AI in Project Management: Promise vs. Practice.”

This course provides a terrific starting point for project managers who want or need to start using AI, or who want or need to deploy AI systems. Fichtner tells us where we are, where we can go from here, and how we might move forward. Early adopters, he says, will have an edge over people who don’t understand or use AI capabilities.

For people who want to know how we got here, additional background on AI can be found in “Big Data,” by Viktor Mayer-Schönberger and Kenneth Cukier (Harper Business, 2014); “Big Data @ Work,” by Thomas Davenport (Harvard Business Review Press, 2014); and “Minds Over Matter,” by Jeffrey Rothfeder (Simon & Schuster, 1985).

As incredible as AI is today, and as powerful, capable, and valuable as it will become in the future, AI systems seem to be as uncurious and non-goal-oriented as they were when Rothfeder was interviewing Edward Feigenbaum, Roger Schank, and other pioneers in the 1980s. However, the big questions today are philosophical and ethical, not technical. Who owns the system? What are the questions the system owners want to investigate? What are the problems they want to solve? As Fichtner points out, the biases of the trainers will become the biases of the AIs. We need to understand the biases in the systems we use and in the systems we create. We need to know the sources of data on which AIs have been trained.
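One modest, concrete way to act on that advice is to profile a dataset before it is used to train or fine-tune a model, simply counting how well different groups, regions, or project types are represented. The short Python sketch below is illustrative only and is not from Fichtner’s course; the file name and column names are hypothetical.

  # Hypothetical example: profile a training dataset to see which
  # groups are over- or under-represented before any model is trained.
  import pandas as pd

  records = pd.read_csv("past_projects.csv")  # assumed historical project data

  # Share of records per region and per project type; large imbalances
  # here will be reflected in any model trained on this data.
  for column in ["region", "project_type"]:
      print(records[column].value_counts(normalize=True).round(2))

Even a simple tally like this speaks to Fichtner’s point about sources: it shows whose history the model will learn from, and whose it will not.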

For example, in mid-July 2023, I witnessed a trivial “AI Hallucination.” I asked ChatGPT, “Who wrote ‘Sympathy for the Devil’?”

It answered “Bob Dylan.”

I corrected it, typing in, “No, ‘Sympathy for the Devil’ was written by Mick Jagger and Keith Richards of the ‘Rolling Stones.’”

ChatGPT responded with what a teacher might call “a lame excuse,” writing back, “Mick Jagger and Keith Richards were trying to write in the style of Bob Dylan.  I am sorry. I am still learning.”

Parenthetically, the fact that ChatGPT recognized similarities between Bob Dylan’s lyrics and those of Mick Jagger and Keith Richards illustrates that it can learn by analogy, which, in and of itself, is amazing and is also an indication of intelligence.

On Sept. 9, 2024, I asked ChatGPT how it corrects errors – how it learns. It responded, in part,

“If I provide an incorrect response, and you correct me, … Here’s how I would typically handle the situation:

  1. Acknowledge the Mistake: I would recognize that I gave the wrong answer and acknowledge your correction.
  2. Provide the Correct Answer: I would follow up with the correct information, incorporating your correction to ensure the conversation proceeds based on accurate details.
  3. Learn from the Error?: While I can’t “learn” from an individual conversation (I don’t retain data or memories across sessions), in that specific moment, I can adapt my response based on your correction and provide the correct information going forward.

Errors happen, and I rely on feedback during the conversation to correct misunderstandings in real time.”

It also said part of the process is:

“Check Against Known Data: When you offer a correction, I cross-reference it against the knowledge I was trained on. If your correction contradicts the information I “know” to be accurate, I wouldn’t automatically assume the new information is correct.”
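In practical terms, that “in-conversation” correction is simply context: when ChatGPT is reached through an API, the user’s correction travels along in the message history of every subsequent request, and nothing is written back into the model itself. The Python sketch below is one way to picture this; it assumes the official OpenAI Python client, and the model name is an arbitrary placeholder.

  # Illustrative sketch: the model only "remembers" the correction because
  # the whole conversation history is re-sent with each request; its
  # underlying weights are never changed by the exchange.
  from openai import OpenAI

  client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

  history = [
      {"role": "user", "content": "Who wrote 'Sympathy for the Devil'?"},
      {"role": "assistant", "content": "Bob Dylan."},  # the wrong answer
      {"role": "user", "content": "No, it was written by Mick Jagger and Keith Richards."},
  ]

  reply = client.chat.completions.create(
      model="gpt-4o",  # placeholder model name
      messages=history,
  )
  print(reply.choices[0].message.content)

Once the conversation ends, that history is gone, which is exactly what ChatGPT meant by not retaining data or memories across sessions.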

More than simply an AI Hallucination, this is an example of ChatGPT extending its knowledge, another demonstration of intelligence.

This demonstrates both ChatGPT’s ability to “learn” and OpenAI’s commitment to transparency. It also suggests that Fichtner’s videos on Ethics, Responsible AI, and Data Bias may be the most important in the course.


Lawrence J. Furman, MBA, PMP, is currently exploring leadership and innovation in “Adventures in Project Management,” which he plans to publish in 2025.  He did not use ChatGPT, Bard, or any other LLM in writing or editing this essay.