I'm a few minutes in and the quantum coherence of microtubules doesn't feel like the way I'd approach this issue!
still holding strong that computers could eventually be conscious, even via a totally different substrate (CPU vs. brain)
the vast majority of serious neuroscientists agree that the hard problem of consciousness lies outside the scope of science - that doesn't mean it's not endlessly interesting!
I think consciousness is emergent like so:
brains (even ego-free ones) take in external info (via biological means - hormones, electrical signals from sensory apparatus, etc.) and respond to it in ways that help survival
these various processes/circuits have always talked to each other, but over time you get more organization within circuits, as well as between them
at some point (this is where consciousness starts to come in) you reach max benefit by investing more and more in mental states - moods and emotions or motivations or whatever
up until this point it's all science and not philosophy IMO
BUT THEN you get this phantom process by which there's a meta-assessment of the management processes, and that thing's job is to sense the mental states. it's turning a sensory mirror on the brain itself. that's consciousness. moods and emotions and qualia like that... they'll never be pinned down by science
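the layers above could be caricatured in code. this is a purely illustrative toy, not a claim about real neural architecture - every function name and number here is invented:

```python
# Toy sketch of the layered picture: circuits react to external info,
# a management layer compresses that into coarse "mental states", and a
# meta-assessment process senses those states (the "sensory mirror").
# All names and formulas are invented for illustration.

def circuit(stimulus: float) -> float:
    """A low-level process: reacts to external info in a survival-relevant way."""
    return max(0.0, stimulus)  # respond only to signals worth responding to

def management_layer(responses: list[float]) -> dict[str, float]:
    """Aggregates circuit activity into coarse mental states ("moods")."""
    total = sum(responses)
    return {
        "arousal": total / (len(responses) or 1),
        "valence": -total if total > 1 else total,  # arbitrary toy rule
    }

def meta_assessment(mental_states: dict[str, float]) -> str:
    """The 'sensory mirror': a process whose *input* is the brain's own states."""
    return max(mental_states, key=lambda k: abs(mental_states[k]))

stimuli = [0.2, 0.9, -0.3]
responses = [circuit(s) for s in stimuli]   # layer 1: react to the world
states = management_layer(responses)        # layer 2: summarize into moods
dominant = meta_assessment(states)          # layer 3: sense the moods themselves
```

the point of the cartoon is just the direction of the arrows: the meta layer never sees the world, only the brain's own summaries of itself.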
the CREEPIEST thing about the mind is that essentially every decision we "consciously" make... has already been made. our brain is just doing its thing and then the ego invents an explanation (in what feels like real time). there's a decent book about this (Strangers to Ourselves).
actually as I think through this now, I don't think anyone's going to get AI to become conscious unless they REALLY push, and in creative, counter-intuitive ways. I strongly suspect that nobody can program an AI directly into self-awareness. I'm betting they need to push semi-smart AIs into survival situations, and maybe ones where they can only thrive by empathizing/communing with other AIs. you could also make them commune with humans, but the simulations will be running so fast that it would be pointless/impractical to have humans in the mix. this brings up a potential chicken/egg thing - are humans social because of how our brains are, or are our brains this way because we were social? my above premise leans hard into the latter, and of course it's probably not either/or.
this paper says the same thing, in a formal way:
In the present hypothesis, awareness is a perceptual reconstruction of attentional state; and the machinery that computes information about other people’s awareness is the same machinery that computes information about our own awareness.
so yeah, you can push a computer to be smarter and smarter, but if it doesn't need to model others' thoughts/feelings, then it will likely not end up with thoughts/feelings
this is actually pretty fucking wild: self-consciousness existing primarily as a tool for modelling the consciousness of others. you only have a sense of "you" as a Rosetta Stone for interacting with someone else.
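the "same machinery" idea can be made concrete with a cartoon: one function builds a simplified model of *some* agent's attentional state, and whether that agent is you or someone else is just a matter of what you point it at. everything named here is invented for illustration, not taken from the paper:

```python
# Toy caricature of the attention-schema claim: the SAME model is used
# for "their awareness" and "my awareness" - self-modeling is the
# other-modeling machinery pointed inward. (All names are hypothetical.)

def attention_schema(agent: str, attending_to: str, intensity: float) -> dict:
    """Builds a simplified, lossy model of some agent's attentional state."""
    return {
        "agent": agent,
        "attending_to": attending_to,
        "intensity": round(intensity, 2),
    }

# modeling someone else's awareness:
their_awareness = attention_schema("her", "the apple", 0.8)

# modeling your own awareness - same function, inward target:
my_awareness = attention_schema("me", "her awareness of the apple", 0.5)
```

the design choice that matters is that there is only one `attention_schema` function - the cartoon version of "the machinery that computes information about other people's awareness is the same machinery that computes information about our own."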