AB,
The machines do what they are programmed to do.
You mean like an ant does, if its basic functions had been programmed? Predictably you've just ignored the explanations I've given you, but the point remains that relatively simple constituent parts interacting with each other will produce emergent properties the programmer never intended or even envisaged.
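If you want something concrete, here's a throwaway Python sketch of my own (a bare-bones Conway's Game of Life on a wrap-around grid, purely illustrative, nothing to do with any particular machine): the only thing the programmer writes is a local rule about neighbour counts, yet a glider marches across the grid even though no glider appears anywhere in the code.

```python
# Toy illustration of emergence (Conway's Game of Life on an assumed toroidal grid).
# The programmer specifies only a local birth/survival rule; the glider that
# walks across the grid is written nowhere in the code.
from collections import Counter


def step(alive, width, height):
    """Apply the Life rule once to a set of live (x, y) cells."""
    neighbour_counts = Counter(
        ((x + dx) % width, (y + dy) % height)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is born with exactly 3 live neighbours, and survives with 2 or 3.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in alive)}


def show(alive, width, height):
    for y in range(height):
        print("".join("#" if (x, y) in alive else "." for x in range(width)))
    print()


if __name__ == "__main__":
    WIDTH, HEIGHT = 12, 12
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a standard glider seed
    for _ in range(8):
        show(cells, WIDTH, HEIGHT)
        cells = step(cells, WIDTH, HEIGHT)
```

Run it and watch the pattern travel: nobody "programmed" a traveller, only a neighbour-counting rule.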
The data produced by a computer has no meaning to the computer itself.
Because, so far at least, no computers we've built have had anywhere near the complexity of brains. So what?
Meaning only exists in the conscious perception of human observers. Machines do not learn - they just produce reactions to data. The concept of learning exists in human minds, not in the computer.
No, machines can "learn" in the sense that collectively they can produce results that weren't encoded in their programming. Imagine, for example, that there were no such things as sodium or chlorine, and that you wrote perfect designs for each and had them manufactured. Neither sodium nor chlorine is salty, but imagine too that they bonded to produce - yes, sodium chloride (i.e. salt). There's nothing in either chemical that's salty, yet together they produce saltiness - thus saltiness is an emergent property of interacting constituent components, neither of which is itself salty.
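To make the same point without the chemistry, here's another throwaway Python sketch (again purely illustrative and of my own invention - the hidden "true" relationship is just a random line I made up): the programmer writes only a generic update rule, and the particular numbers the program ends up encoding appear nowhere in the source; they're extracted from the examples.

```python
# Toy illustration of "learning": the final parameter values are written
# nowhere in this file; a generic update rule (plain stochastic gradient
# descent on a linear model) pulls them out of the training examples.
import random

random.seed(42)

# The "world" picks a hidden linear relationship the program must discover.
true_w = random.uniform(-5, 5)
true_b = random.uniform(-5, 5)
examples = [(x, true_w * x + true_b) for x in range(-10, 11)]

w, b = 0.0, 0.0            # the program starts knowing nothing
learning_rate = 0.005

for _ in range(5000):
    x, y = random.choice(examples)
    error = (w * x + b) - y
    # Nudge the parameters to shrink the error on this example.
    w -= learning_rate * error * x
    b -= learning_rate * error

print(f"hidden:  w={true_w:.3f}, b={true_b:.3f}")
print(f"learned: w={w:.3f}, b={b:.3f}")
```

The update rule was programmed; the answer it settles on was not.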
Now scale that up to, say, termites - what remarkable emergent properties do you think they display that no individual termite acting alone could produce?
Now scale that up again to people, and the 100-trillion-plus synapses in our brains interacting at warp speed. Why on earth would you not think it perfectly reasonable to conclude that consciousness is likely an emergent property of that vast, unfathomable complexity?
For my PhD I did research into new methods of computer-aided optimisation. I took the credit for the results - not the computer.
No doubt, and yet you seem utterly incapable of following even simple arguments in logic. Why is that?