Within the past few years, I have read a lot of comments in social media and forums where people respond with accusations of people being a "Russian bot" or "Chinese bot". I can understand what they mean if they use "shill", but using the word "bot" means to me that the original poster is an autonomous web scraper or automatic program to vigorously defend or attack something. But the thing is, even if there is a nation-state-coordinated agenda to subvert online opinion, these campaigns are still being conducted by real people, who I am fairly certain are just regular humans. You can argue that programs like GPT-3 are generating the posts, but I think it's obvious by the flow and logic/emotional response chain between accusers and accusees that it is between real people because GPT-3 simply cannot handle pulling in knowledge of theory-building in a coherent, substantive way (at least not yet). GPT-3 can give the facade of having the complex, novel thoughts and responses typical of heated discussions, but usually its output is quite vapid. It also doesn't typically make typos, doesn't reproduce the idiosyncratic "nicely formatted" writing of humans, and doesn't make use of new information very well.
My conclusion then is that real people are pushing their viewpoints still, not "bots", but I may be misunderstanding the way the term is used. Or, has "bot" changed in meaning/use? Is it just a word used to effortlessly shut down opposing viewpoints? I'm trying to seriously understand how people use this word now.
But, as everything is virtual, you have no direct way to prove whether some virtual actor is human or software. Only the quality of their actions can give you hints. So people go around accusing others left and right whenever their arguments follow the protocol too closely and show no sign of independent thinking or genuine understanding.
This, on the other hand, has an additional meaning in the form of human bots. A bot technically does not need to be entirely virtual and digital; it could also be a meatbag with wetware following a guidebook, just acting out pre-defined routines. You usually see this in companies, call centers and propaganda jobs.
This is BTW a funny round trip, as the word "robot" was originally coined for humans doing braindead labor, just following the rules, not thinking on their own. Only later did it move to machines.
You kind of answered your own question. Many people never differentiate between bot and shill, preferring instead to lump the two words together. A bot is programmatic; a shill is a human agent who amplifies messages or spreads disinformation manually. That said, some shills do use a level of automation, for example running several accounts at the same time through some bespoke software arrangement.
This word has been abused to the point where it has become meaningless, much like the word 'literally', which I now hear in every sentence. Even worse, it's used carelessly as an insult to criticise anyone who disagrees, rather than describing the actual quick and automatic actions of an entity behaving more like a 'robot', a 'botnet' or an 'aimbot'.
> Within the past few years, I have read a lot of comments in social media and forums where people respond with accusations of people being a "Russian bot" or "Chinese bot". I can understand what they mean if they use "shill", but using the word "bot" means to me that the original poster is an autonomous web scraper or automatic program to vigorously defend or attack something.
I think shill is the better word to describe this. If people cannot distinguish between a real user account and a bot account, then either that bot account has passed the Turing test or Twitter is not doing enough to disclose actual bot accounts. They should do what Keybase did with disclosing bot accounts: sign them up differently and tie them to the real account holder. Google reCAPTCHA plus phone verification at sign-up solves the mass automated sign-up issue; distinguishing between real and bot accounts by linking them to 'owners' would clear this mystery up.
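To make that Keybase-style linking concrete, here is a toy sketch in Python (the key handling, account names and function names are all made up; this is not Keybase's or Twitter's actual mechanism): the platform signs a small record stating which real account owns a given bot account, and anyone can ask the platform to verify that record.

    import hashlib, hmac, json

    # Hypothetical platform-side signing key; a real platform would use
    # proper key management. This is only a toy illustration.
    PLATFORM_SIGNING_KEY = b"platform-secret"

    def attest_bot_ownership(bot_handle, owner_handle):
        """Return a signed record declaring which real account owns a bot account."""
        record = {"bot": bot_handle, "owner": owner_handle}
        payload = json.dumps(record, sort_keys=True).encode()
        record["sig"] = hmac.new(PLATFORM_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return record

    def verify_attestation(record):
        """Check that the ownership record really was signed by the platform."""
        payload = json.dumps({"bot": record["bot"], "owner": record["owner"]},
                             sort_keys=True).encode()
        expected = hmac.new(PLATFORM_SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(record.get("sig", ""), expected)

    # e.g. "@newsbot" is publicly tied to its human operator "@alice"
    attestation = attest_bot_ownership("@newsbot", "@alice")
    print(attestation)
    print(verify_attestation(attestation))  # True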
> You can argue that programs like GPT-3 are generating the posts, but I think it's obvious by the flow and logic/emotional response chain between accusers and accusees that it is between real people because GPT-3 simply cannot handle pulling in knowledge of theory-building in a coherent, substantive way (at least not yet)
While it can generate very convincing sentences, it has the limitation of always accepting and responding to nonsensical input, which is especially likely to come up in very divisive / heated discussions. Its output is also limited to the data it was trained on, which only runs up to 2019.
> My conclusion then is that real people are pushing their viewpoints still, not "bots", but I may be misunderstanding the way the term is used. Or, has "bot" changed in meaning/use? Is it just a word used to effortlessly shut down opposing viewpoints?
Exactly. This is very close to what this Twitter blog post outlines about the misunderstanding of the word "bot" [0]. However, my point still stands: if the accuser cannot prove that a particular account has the properties of a "robot", i.e. that it issues replies very quickly in an automated fashion [0], then perhaps the fault lies with Twitter for not finding a way to disclose whether these accounts are a "bot or not". They should overhaul the sign-up process the way Keybase dealt with bots: link the bot account to a real account and disclose which real account owns that bot account.
[0] https://blog.twitter.com/en_us/topics/company/2020/bot-or-no...
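For what it's worth, the "issues replies very quickly" property is at least checkable mechanically. A crude sketch in Python (the threshold, timestamps and function name are made up; fast posting is evidence of automation, not proof, and a human shill can post slowly):

    from datetime import datetime, timedelta
    from statistics import median

    def looks_automated(post_times, min_interval=timedelta(seconds=5), min_posts=20):
        """Flag an account whose typical gap between posts is implausibly short.

        A heuristic only: a slow poster can still be a shill, and a fast
        poster might just be very online.
        """
        if len(post_times) < min_posts:
            return False
        ordered = sorted(post_times)
        gaps = [b - a for a, b in zip(ordered, ordered[1:])]
        return median(gaps) < min_interval

    # Fabricated example: 30 replies spaced two seconds apart.
    start = datetime(2021, 1, 1, 12, 0, 0)
    burst = [start + timedelta(seconds=2 * i) for i in range(30)]
    print(looks_automated(burst))  # True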