HACKER Q&A
📣 amichail

Is it disappointing that ChatGPT does not illuminate human intelligence?


How ChatGPT comes up with a response is rather mysterious and doesn't reveal insight into how intelligence works, and hence doesn't shed much light on how human intelligence works or how one might simulate it in a more direct manner on a computer.

Isn't this a disappointing aspect of recent AI advances?


  👤 kafkaesqueKino Accepted Answer ✓
ChatGPT doesn't do anything independently or self-instructed. It's an LLM running on a transformer: it predicts the next tokens based on its training and fine-tuning. It doesn't understand what it's outputting; it may seem to, because the OpenAI team kept shoving hacks and patches into the model to improve it. So, because it doesn't actually think, there is little we can derive from it about thinking.
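To make the "predicts the next words" point concrete, here's a minimal sketch of next-token prediction using a toy bigram counter. This is an illustration of the training objective only, not ChatGPT's actual architecture; the corpus and function names are made up.

```python
from collections import Counter, defaultdict

# Toy "language model": count word bigrams in a tiny corpus, then
# predict the most likely next word. A transformer does this at a
# vastly larger scale with learned weights, but the core objective
# is the same: predict the next token given what came before.
corpus = "the cat sat on the mat the cat ate".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent continuation seen in "training".
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

Nothing here "understands" cats or mats; it's pure statistics over observed sequences, which is the commenter's point scaled down.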

👤 ftxbro
There was a paper where they stuck someone's head into an MRI scanner and had them look at a photo. Using fMRI (which detects which parts of the brain are most active), and with a calibrated linear transformation from brain activity to Stable Diffusion's representation at some layer, they were able to read the subjects' minds from their fMRI.

In other words, once the mapping from fMRI activity to the model's representation was calibrated, they could use fMRI to read minds by having Stable Diffusion draw what the subjects saw.
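The "calibrated linear transformation" idea can be sketched as a ridge regression from voxel activity to an image-model embedding. Everything below is synthetic and hypothetical — the real study used actual fMRI recordings and Stable Diffusion's latent space; this only shows the shape of the calibration step.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_voxels, n_latent = 200, 500, 64

# Synthetic stand-ins: "brain activity" and the image-model embeddings
# recorded while subjects viewed known photos (the calibration set).
fmri = rng.normal(size=(n_samples, n_voxels))
true_W = rng.normal(size=(n_voxels, n_latent))
latents = fmri @ true_W + 0.01 * rng.normal(size=(n_samples, n_latent))

# Calibration: closed-form ridge regression from voxels to latents.
lam = 1.0
W = np.linalg.solve(fmri.T @ fmri + lam * np.eye(n_voxels),
                    fmri.T @ latents)

# "Mind reading": map a new scan to a predicted latent, which would
# then be handed to the image model's decoder to draw what was seen.
new_scan = rng.normal(size=(1, n_voxels))
predicted_latent = new_scan @ W
print(predicted_latent.shape)  # (1, 64)
```

The notable part is how simple the bridge is: a single linear map between brain activity and a learned representation, with all the heavy lifting done by the pretrained image model.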

If that is true, then it does shed some light on at least some parts of human intelligence.


👤 nh23423fefe
No, it isn't disappointing. Why would you have that expectation? It also doesn't illuminate dog intelligence. Or did you expect to get that information from Boston Dynamics?

👤 catchnear4321
> How ChatGPT comes up with a response is rather mysterious

Have you… asked it… how it… works?