Published on January 19, 2023

By Kohava Mendelsohn

Recently, OpenAI released ChatGPT to the public. It is an extremely advanced AI chatbot that can answer queries for you. I think of it as a very advanced autocomplete system: it takes questions as input and outputs what it would expect to see as an answer, much like how your phone predicts the next word you will type.
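To make the autocomplete analogy concrete, here is a minimal sketch of next-word prediction using a toy bigram model: count which word follows which in some sample text, then "autocomplete" by picking the most frequent follower. This is a vast simplification for illustration only; ChatGPT uses a large neural network trained on far more data, not simple word counts, and all names here are my own.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(followers, word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = followers.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# Train on a tiny sample; "cat" follows "the" twice, so the model
# "autocompletes" the phrase "the" with "cat".
model = train_bigrams("the cat sat on the mat and the cat ate the fish")
print(predict_next(model, "the"))  # prints "cat"
```

The same idea, scaled up to billions of parameters and conditioned on entire conversations rather than a single preceding word, is what makes ChatGPT's answers fluent without making them true.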

ChatGPT is very good at summarizing, at surfacing information that is easily accessible on the internet, and at sounding convincing. It is much worse at providing information that is truthful or logically sound. In short, it excels at being wrong while sounding right. For example, here's an excerpt of a conversation I had with ChatGPT after I asked it to compose a haiku for me:

ChatGPT's response to the fact that a haiku it created did not have a 5/7/5 structure

This factual dissonance doesn't mean ChatGPT is not a useful tool, just that we still need to think critically about how we use it. It can definitely help when you already know the facts and need to make them sound convincing, or when you want advice about a broad topic.

For example, our latest blog post on effective engineering communication was actually written by ChatGPT with the prompt "write a blog post on how engineers can communicate effectively". You can read it here.

ChatGPT provided useful advice in a general context, eerily similar to advice I would give. In a specific context, I believe ChatGPT's advice would get worse and mine would get better, because ChatGPT is only as good at answering a question as the number of similar questions and answers in its training data.

In an engineering communication context, we can use ChatGPT to phrase our ideas more clearly and effectively, to provide broad overviews and general ideas, and to serve as a brainstorming aid. We need to be very wary of using it as a factual reference, and we must develop our own fact-checking skills to ensure we are not duped by an incorrect ChatGPT response ourselves.

Because of ChatGPT and other new AIs, I expect we will soon face a world where fact-checking well-written arguments becomes more and more important. Clear communication is still essential, but as engineers we also have a responsibility to tell the truth and to push back wherever we see incorrect information being perpetuated.