AB,
"Contemporary software systems are becoming increasingly large, heterogeneous, and decentralised. They operate in dynamic environments and their architectures exhibit complex trade-offs across dimensions of goals, time, and interaction, which emerges internally from the systems and externally from their environment. This gives rise to the vision of self-aware architecture, where design decisions and execution strategies for these concerns are dynamically analysed and seamlessly managed at run-time. Drawing on the concept of self-awareness from psychology, this paper extends the foundation of software architecture styles for self-adaptive systems to arrive at a new principled approach for architecting self-aware systems. We demonstrate the added value and applicability of the approach in the context of service provisioning to cloud-reliant service-based applications."
Published in: 2014 IEEE/IFIP Conference on Software Architecture (WICSA)
Perhaps you should give these people a call and tell them they're wasting their time, since, according to you, self-aware software could "never" happen?
Your thesis here seems to be:
1. Software functions as long chains of cause and effect.
2. Consciousness involves "free" will that's not constrained by cause and effect.
3. Therefore software can never be conscious.
The problem is step 2: the fact that "free" will feels as though it sits outside cause and effect does not mean that it actually does. Moreover, introducing an agent to do the "controlling" that is itself unconstrained by cause and effect raises insurmountable problems of definition (what exactly is this "soul"?) and of logic (what would "free" even mean for such a "soul"?).
In other words, as an explanation the "soul" is both unnecessary and incoherent.