the_dunk_tank
It's the dunk tank.
This is where you come to post big-brained hot takes by chuds, libs, or even fellow leftists, and tear them to itty-bitty pieces with precision dunkstrikes.
Rule 1: All posts must include links to the subject matter, and no identifying information should be redacted.
Rule 2: If your source is a reactionary website, please use archive.is instead of linking directly.
Rule 3: No sectarianism.
Rule 4: TERFs/SWERFs are not welcome.
Rule 5: No ableism of any kind (that includes stuff like libt*rd)
Rule 6: Do not post fellow hexbears.
Rule 7: Do not individually target other instances' admins or moderators.
Rule 8: The subject of a post cannot be low-hanging fruit, that is, comments or posts made by a private person that have a low number of upvotes/likes/views. Comments or posts made on other instances that are accessible from hexbear are an exception to this. Posts that do not meet this requirement can be posted to [email protected]
Rule 9: if you post ironic rage bait im going to make a personal visit to your house to make sure you never make this mistake again
look, some of these posters are maybe being overly confrontational about this, but that blade runner point was basically irrelevant. for one, the replicants in blade runner are mostly biological, more akin to edited clones than to an algorithmic learning machine; they're definitely not computers, and certainly nothing like a 2023 LLM chatbot. obviously a replicant could be conscious and sentient, since they're structurally similar to humans, who are our one source of even somewhat reliable reports of subjectivity. but the film doesn't really interrogate any of the fundamental philosophical questions, like subjectivity and identity, or whether qualia are intrinsic or relational; it just assumes answers to those questions and lets the drama play out.

the Data example from star trek isn't relevant either, because Data is built with unknown and fictitious technologies and scientific theories, which could hypothetically account for and replicate consciousness rather than mere information processing. but the Data example did reference the argument that goes, to paraphrase, 'if a machine is outwardly identical in behavior to a human, that is evidence it is conscious or capable of subjectivity.' in actuality we cannot necessarily know this from outward behavior, assuming it is hypothetically possible for all of our behaviors to be accounted for by information processing alone (which is the reductionist physicalist take being criticised by me and some users here).

just as a statistical model of language use will not reveal (or create) the definitions of the terms of the language being analyzed, so too would a statistical model of human behavior not reveal (or create) the subjective experience of that behavior. to use another analogy: if i make a fully detailed model of a bladder on a computer, it will never produce real piss on my desk, no matter how detailed my algorithm may be. in the same way, a fully detailed model of a brain on a computer will not produce real subjectivity. we can use computers for purely information-processing tasks; we cannot use them to create subjectivity any more than we can use them to create piss.
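to make the statistical-model point concrete, here's a toy sketch (the corpus and function names are just made up for illustration): a bigram model learns nothing but which word follows which, yet it can still emit plausible-looking strings. its entire internal state is a table of co-occurrence counts over uninterpreted tokens; no definitions, no meanings, anywhere.

```python
import random
from collections import defaultdict

# toy corpus; any word list works, the model never interprets it
corpus = ("the replicant walked the city and the city watched "
          "the replicant and the rain fell on the city").split()

# record word-to-next-word transitions: pure surface statistics
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    """sample a word sequence from transition counts alone."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 8))
```

the output mimics the corpus's surface patterns, but nothing in `transitions` encodes what "rain" or "city" refers to, which is the sense in which a statistical model of a phenomenon isn't the phenomenon itself.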
Quality post and I fully agree with it.