AB,
The fact is that we are free to think about the strings and consequences before making a conscious choice - they may influence, but they do not dictate our choice. Despite being consciously aware of good and bad consequences, we are still capable of deliberately making a bad choice.
You remind me a little bit of a bluebottle endlessly banging against a closed window, while all the time the window next to it is open to the outdoors – only you will not or cannot ever stop banging your head long enough to try the open window.
As reason and logic seem to be perennially lost on you, try this instead as a thought experiment: without troubling to consider them, imagine just for a moment, if you can, that all the reason and logic and evidence that undo your position are correct. That is, try to imagine a reality in which there’s no separate “we” somehow floating free of our thoughts, but rather that the “we” of colloquial experience is just the perceptual manifestation of a single, integrated whole carrying out vastly complex processes all the time, of which there is only partial awareness. That is to say, imagine that the sensation of decision-making is in fact just those processes playing out, and that it only
feels like there’s a little man at the controls deciding what to do.
I know this is difficult for you, but can you try at least to imagine such a thing? If you can imagine that, can you also see that in this model there’d be no need for the little man for life to carry on
exactly as it does nonetheless – that is, at a functional rather than an experience-driven narrative level we’d all be ticking along just as we do?
Now if you can grasp this, you should be able to grasp that this model would be indistinguishable from the model you feel you inhabit
at an experiential level: there’d be no difference at all in fact. And if you can grasp this then, finally, you should be able to see that there’d be no explanatory gap for your little man to fill. In other words, there’s not only no evidence at all for his existence, but nor is there any
need for his existence.
Still with me? OK, final request: your stock response to arguments with which you can’t engage is, “the fact that you can write this at all must mean…” yada yada nonsense. We’re at a more fundamental level of abstraction here, though, which makes this assertion otiose – it’s already falsified by the model, so you’d need to invalidate the model a priori before you could deploy your a posteriori reasoning (such as it is).
So go on then – surprise me after all this time by actually engaging with the argument that invalidates the mantra rather than just repeating it.
Can you do that?