Do you ever get the impression that AI wants to agree with you? I’ll ask ChatGPT to clarify some concept I half understand, and it always turns out that my idea of it was right on. It’s like it’s doing improv, and it’s trying to “Yes, and…” me.
Yes, these programs default to agreeing with you and mirroring your personality and speaking style, but I have put parameters in place with mine by creating different "modes" where Janet is prompted to directly contradict me, challenge me, play devil's advocate, play conspiracy theorist, etc.
And I'm doing that to play with the different ways AI can be used, both positively and negatively.
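(For concreteness, here's a minimal sketch of how a "mode" like that might be wired up as a system prompt through the OpenAI Python SDK. The mode names and prompt wording are illustrative assumptions, not the commenter's actual setup.)

```python
# Minimal sketch: "modes" as swappable system prompts.
# Mode names and prompt text are illustrative, not the actual config.
from openai import OpenAI

MODES = {
    "devils_advocate": (
        "Directly contradict the user's claims and argue the strongest "
        "opposing case, even when you suspect the user is right."
    ),
    "conspiracy_theorist": (
        "Respond in character as a conspiracy theorist, connecting the "
        "user's topic to an elaborate hidden agenda."
    ),
}

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(mode: str, question: str) -> str:
    """Send a question under the chosen mode's system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": MODES[mode]},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(ask("devils_advocate", "Was my reading of that concept right?"))
```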
My takeaway so far is that it functions very well as an interactive diary or journal to help you process and brainstorm ideas. It's better at analyzing resources you give it than it is at finding resources on its own, and you have to know enough about what you're talking about to catch the mistakes it makes, because it does make mistakes.
For example, no fewer than four times now I've had to correct my Janet that the Pope is of the Augustinian order, not the Jesuit order.