I've heard of several cases where people using ChatGPT were given completely incorrect information. A couple of lawyers may be facing sanctions because they over-trusted AI. How can programmers better support these 5 laws?
For this specific problem, there is a baseline compulsion for ChatGPT to answer. Embeddings encode connections, not necessarily stored facts, so adding a dimension to the vector that marks a connection as a known fact might help. But given the sheer volume of data, this would be impractical today, and "facts" seem less clear in a world of personal truths. So the best option, for now, is to educate users and provide appropriate warnings.

Developers of the model need to build governing AIs, each dedicated to one of the individual laws. Each chained AI is meant to enforce its assigned law by analyzing the output (not the question) from the governed AI. Because the governors never see the user's prompt, there is no context or ability for prompt injection.
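To make the chained-governor idea concrete, here is a minimal sketch. All names here (`Governor`, `cites_real_sources`, `govern`) are hypothetical illustrations, not a real API: each governor is a function that inspects only the model's output against one rule, so the user's prompt never reaches it.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Verdict:
    passed: bool
    reason: str = ""

# A governor sees only the governed model's OUTPUT, never the user's prompt,
# which is why prompt injection cannot reach it.
Governor = Callable[[str], Verdict]

def cites_real_sources(output: str) -> Verdict:
    # Placeholder rule: a real governor would verify each citation against
    # a trusted legal index instead of this toy string check.
    if "v. " in output and "[verified]" not in output:
        return Verdict(False, "contains an unverified case citation")
    return Verdict(True)

def govern(output: str, governors: List[Governor]) -> str:
    # Run the output through each chained governor; withhold it if any rule fails.
    for g in governors:
        verdict = g(output)
        if not verdict.passed:
            return f"WITHHELD: {verdict.reason}"
    return output

print(govern("Smith v. Jones supports this claim.", [cites_real_sources]))
```

The key design choice is that each governor is single-purpose and stateless: one law per checker, applied to output only, so adding a new law means adding a new function to the chain rather than retraining the governed model.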