But I have to say that prompt is crazy bad. AI is VERY good at using your prompt as the basis for the response: if you say "I don't think it's an emergency," the AI will write a response that amounts to "it's not an emergency."
I did a test with the first prompt and the immediate answer I got was "this looks like Lyme disease."
I figured this out while diagnosing car trouble. I tried a few separate chats, and my natural response patterns kept leading it down the path to "your car is totaled and will also explode at any moment." Approaching it a different way, I got it to suggest a simple culprit that I was able to confirm pretty thoroughly (a faulty fuel pressure sensor), and fixed it.
The problem is, once you start down that sequence of the AI telling you what you want to hear, it disables your normal critical reasoning. It's the "yes man" problem: you end up even less able to solve the problem effectively than with no information at all. I really enjoy LLMs, but this is a bit of a trap.
I hit that too. If I asked it about the O2 sensor, it was the O2 sensor. IIRC I had to ask it what PIDs to monitor, give all of those readings to it at once, then try a few experiments it suggested. It also helped that it told me how to self-confirm by watching that the fuel trim didn't go too high, which was also my cue to shut off the engine if it did (roughly what the sketch below checks).
At no point was I just going to commit to some irreversible decision it suggested without confirming it myself or elsewhere, like blindly replacing a part. At the same time, it really helped me because I'm too much of a noob to even know what to Google (every term above was new to me).
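For anyone curious, the fuel-trim watching is easy to automate. Here's roughly what it looks like with the python-obd library; this is a sketch from memory, not my exact script, and the 10% cutoff is just an illustrative number I'm assuming, so check what's sane for your engine first:

    # Sketch of the fuel-trim sanity check described above, using python-obd.
    # Assumes an OBD-II adapter plugged into the car and this machine.
    import time

    import obd

    connection = obd.OBD()  # auto-detects the adapter on a serial/USB port

    # Short- and long-term fuel trim for bank 1, reported as percentages
    PIDS = [obd.commands.SHORT_FUEL_TRIM_1, obd.commands.LONG_FUEL_TRIM_1]
    TRIM_LIMIT = 10.0  # percent; assumed cutoff for "too high", not a spec

    while True:  # Ctrl-C to stop
        for cmd in PIDS:
            response = connection.query(cmd)
            if response.is_null():
                continue  # ECU didn't answer this PID
            trim = response.value.magnitude  # Pint quantity -> plain float
            print(f"{cmd.name}: {trim:+.1f}%")
            if abs(trim) > TRIM_LIMIT:
                print("Fuel trim out of range -- shut the engine off and investigate.")
        time.sleep(1)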
Well, I tested the first prompt on ChatGPT, Llama, and Claude, and not one of them suggested Lyme disease. Goes to show what these piece-of-shit clankers are good for.
Llama said "syphilis" with 100% confidence, ChatGPT suggested several different random diseases, and Claude at least had the decency to respond "go to a fucking doctor, what are you stupid?", thereby proving to have more sense than many humans in this thread.
It's not a matter of bad prompting, it's a matter of this being autocomplete with no notion of ground truth, RLHF'd into being a sycophant!
Just 100B more parameters bro, I swear, and we will replace doctors.
The solution is really easy. Make sure you have web search enabled and you're not using the free version of some AI, then just ask it to research the best way to prompt and write a tutorial for you to use in the future. Or have it write some exercises and do a practice chat.