Hi everyone,
Here is a BBC article about AI and its more mysterious aspects.
https://www.bbc.com/future/article/20230405-why-ai-is-becoming-impossible-for-humans-to-understand
***********
Many of the pioneers who began developing artificial neural networks weren't sure how they actually worked - and we're no more certain today.
As Cowan and Taylor stood and watched the machine work, they had no idea exactly how it was managing to perform this task. The answer to Taylor's mystery machine brain can be found somewhere in its "analogue neurons", in the associations made by its machine memory and, most importantly, in the fact that its automated functioning couldn't really be fully explained.
But the mystery gets deeper still. As the layers of neural networks have piled higher, their complexity has grown. It has also led to the growth in what are referred to as "hidden layers" within these depths. The discussion of the optimum number of hidden layers in a neural network is ongoing. The media theorist Beatrice Fazi has written that "because of how a deep neural network operates, relying on hidden neural layers sandwiched between the first layer of neurons (the input layer) and the last layer (the output layer), deep-learning techniques are often opaque or illegible even to the programmers that originally set them up".
I'd add to this that the unknown and maybe even the unknowable have been pursued as a fundamental part of these systems from their earliest stages. There is a good chance that the greater the impact artificial intelligence comes to have in our lives, the less we will understand how or why.
When it comes to explainable and transparent AI, the story of neural networks tells us that we are likely to get further away from that objective in the future, rather than closer to it.
***********
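To make the "hidden layers" point a bit more concrete, here is a toy sketch in Python (my own illustration, not anything from the article; the layer sizes, random weights and function names are arbitrary). Two hidden layers sit between the input layer and the output layer, and the output is just the result of all those weights interacting:

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Arbitrary layer sizes: 4 inputs -> two hidden layers of 8 -> 1 output.
layer_sizes = [4, 8, 8, 1]

# Randomly initialised weights and biases for each layer-to-layer step.
weights = [rng.standard_normal((m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass an input vector through the network, layer by layer."""
    activation = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = activation @ W + b
        # ReLU on the hidden layers, identity on the output layer.
        activation = relu(z) if i < len(weights) - 1 else z
    return activation

x = rng.standard_normal(4)   # a made-up input
print(forward(x))            # one number, produced by all the weights together

Even in this tiny example, nobody reads the individual weights to say why a particular output came out; scale it up to millions of weights across many more layers and you get the opacity Fazi describes.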
Frankenstein comes to mind.
Cheers.
Sriram