- cross-posted to:
- china@lemmy.ml
As the article mentions, the AI assistants being rolled out are required to be supervised by human employees, meaning they are more like pocket calculators than AGI robotic workers. There doesn’t really seem to be much of an issue here tbh.
Exactly, as long as the human bears the responsibility for the work, I don’t see any problem with this either.
I think AI is stupid, no matter if it’s China or America that is doing it. 🤷♂️
I think that some of the criticism levied at AI, certainly in the way that it is used under neoliberal capitalism, is absolutely valid. And I have my own worries about how it may affect human development going forward, when you can essentially “cheat” your way to answers to a broad array of problems without ever having put in the work to really learn and understand the subject you are dealing with.
But we have to acknowledge that, good or bad, this is still a powerful tool. The question is: how should socialist societies approach this new tool? And unfortunately (or fortunately, depending on your viewpoint), I think we’re already past the point where we can afford not to use it. Pandora’s box has already been opened and there’s no turning back the clock.
In a way this is a bit like the atom bomb. Yes, it may be dangerous and perhaps humanity would be better off if it had never been invented. But the one thing we can’t do is allow only the enemies of socialism to possess this weapon.
Well said
I don’t see how one can reconcile being anti-technology with being a Marxist. You’d be better served in an anarchist community.
It’s very silly to say that because I don’t like LLMs, that I’m anti-technology.
You are; you’re opposed to automation technology, and that’s literally Luddism, which is a form of anti-communism. What position are you even trying to defend? “I’m not anti-technology, I just oppose automation!” Like, the overwhelming majority of new technology is developed to increase labor productivity, which means to increase the degree to which tasks are automated. To oppose automation is to oppose the overwhelming majority of new technologies.
AI is just one of many automation technologies. You realize USPS is largely run on AI? Automation is a major backbone of our economy. But, oooh, there’s no “soul” in OCR software or something, so we have to go backwards and bring back whole warehouses of people to decipher the text on letters and type it into a computer, because we can’t have it done automatically, because muh AI scawy. We’ve gotta burn all the huge breakthroughs in medical science, such as in protein folding, and in materials science that were discovered through AI, because muh AI scawy and lacks a soul or something. We have to abandon research in nuclear fusion technology because all recent breakthroughs in plasma stabilization have come through AI automation.
Do you know what it means to develop the productive forces? It means to improve productivity, which requires continually improving automation and semi-automation (by that I mean, tools that partially automate things but may still require some supervision). We will never reach a higher stage communist society without automation and semi-automation, i.e. without constantly improving labor productivity.
I hope you never in your life use the speech recognition feature on your phone, like dictating your text messages. I hope you never in your life use a translation app like Google Translate or DeepL. Otherwise you are a hypocrite for using the evil, soulless, scawy AIs.
I don’t think it’s fair to accuse someone of Luddism, let alone anti-communism, just because they have reservations or skepticism about a new technology, especially one that is already being misused by capitalist interests to harm workers. There also seems to be some disagreement about the terminology, in the sense that some of the things you call “AI” someone else might not see as such. So first everyone needs to agree on what “AI” even is.
Of course in a general sense automation has immense potential to benefit us as a species. The question is whether certain aspects of what is now called “AI” really do constitute useful automation, particularly when it comes to generating large amounts of what is essentially garbage content. I think we should be careful making pronouncements this early.
My view is that we need to wait and see how this technology will develop and what impact it will really have on society in the long term. What I am sure about, though, is that this technology is here to stay whether we like it or not.
I don’t think it’s fair to accuse someone of Luddism, let alone anti-communism, just because they have reservations or skepticism about a new technology,
I appreciate you saying this. Very strange to see someone immediately attack me just because I don’t share their enthusiasm.
AI is largely used interchangeably with an ANN. Sometimes companies might use it even more broadly than ANN for marketing purposes, but if you actually go take a class in AI at university you will be learning about ANNs.
We used ANNs for research back in my uni days, long before the “AI” hype began. If that is all that is meant by “AI”, then that is a category so broad as to make any discussion of whether it’s good or bad virtually pointless, because there are so many different shapes an ANN can take, and so many functions they can fulfil, that nobody actually knows what, concretely, is being debated.
That’s… the point. That’s like, literally the entire point I am making. It makes no sense to be “anti-AI” because AI is such an incredibly broad spectrum of technology. It’s fine to be critical of specific applications of AI (indeed, there are many examples of AI making things worse or even being used for evil) but being “anti-AI” in an absolute sense is an incredibly dogmatic and entirely unreasonable position and I am utterly appalled so many people here are unironically trying to defend it.
Exactly. A lot of the points that pcalau12i makes muddy that distinction in favor of LLMs, giving credit to LLM development when a different field is in fact responsible for those advances.
AI is largely used interchangeably with an ANN.
I won’t claim authority on the subject, but you are the first person I have ever read making this claim. I do not think this is a commonly accepted viewpoint. At least until a couple of years ago, it seemed to me that there was an attempt to avoid calling neural networks “artificial intelligence” because of the previous AI hype cycles and winters.
Sure, but no one uses the term ANN; it is basically useless and not even that precise. I agree that calling LLMs “AI” is pretty misleading, but it is better to call them what they are. If you want an umbrella term for LLMs, computer vision, reinforcement learning, etc., I would go with machine learning instead of ANN. Even at university, you won’t learn much about ANNs (as in the mathematical model) beyond maybe the first lecture.
There are many approaches in machine learning, and some, though not all, use ANNs.
No one uses the term ANN because most people don’t know what it means, so it’s not good for marketing; “AI” is used in its place, but it refers to the same kind of technology. Machine learning isn’t a good replacement precisely for the reason you give: it is broad and includes things that aren’t ANNs and would not fit under what is generally understood to be AI. If a person bought a piece of tech that said it was powered by AI but used something like a k-means clustering algorithm, they would probably feel a bit ripped off. They would expect something with an actual AI model doing intelligent processing, something that could take advantage of an “AI accelerator”, the consumer-facing name for hardware that does AI inferencing, which is specific to ANNs!
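To make the k-means point concrete, here is a minimal sketch of the algorithm in pure Python (a toy 1-D version, illustrative only). There is no neural network anywhere in it, which is exactly why it sits under “machine learning” but not under what most people mean by “AI” today:

```python
# Toy 1-D k-means: classic machine learning with no neural network involved.
# Illustrative sketch only; real code would use a library and handle
# initialization and edge cases properly.
def kmeans_1d(points, k, iters=20):
    centers = points[:k]  # naive init: just take the first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center
            nearest = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        # move each center to the mean of its cluster (keep old center if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 10.0, 10.3, 9.7]
print(kmeans_1d(data, 2))  # two centers, near 1.0 and 10.0
```

No weights, no training in the ANN sense; it just iteratively shuffles points between cluster centers, which is why nobody would market it as “AI”.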
It is just undeniably true that when “AI” is used in the overwhelming majority of articles, papers, etc. these days, people very specifically have ANNs in mind. If you deny this you are just denying factual reality, you are denying that 2+2=4, and at that point you are being too unreasonable to carry on a discussion with. I am going to tap out of this discussion, as none of y’all are being reasonable in the slightest, stretching to the moon to look for “gotchas” to justify a reactionary anti-technology stance and refusing to listen to someone with a background in this field.
The AI Derangement Syndrome mind virus seems to be impervious to reason, and people will come up with any excuse to justify it. I refuse to engage with this further. Stop replying to me; I do not care to engage further. I do not want to argue with 4 people at once pulling out excuses for why it’s somehow evil for China to invest in technology because muh AI scawy. If you are willing to be educated on why this technology is important, by someone who has a computer science degree and works in this field, then I can teach you, but none of you want to learn; you just want to play word games to justify your anti-AI hysteria, and I have no interest in engaging with this.
I think it’s far more telling how you conflate automation with Large Language Models (colloquially being called AI even though it’s not).
Much of those technologies that you cite as examples and call AI (OCR, computer vision), I don’t understand why you do that. Those technologies existed long before LLMs.
I find the protein folding example especially perplexing since protein folding simulation existed far, far before LLMs and machine learning, and it is ahistorical to claim those as being AI innovations.
I don’t agree with your AI boosterism, but I think what is more perplexing is how misinformed it is.
They are all artificial neural networks, which is what “AI” typically means… bro you literally know nothing about this topic. No investigation, no right to speak. You need to stop talking.
- ANNs used to accelerate discoveries in the material sciences
- ANNs used in USPS
- ANNs used in making breakthroughs in protein folding
- ANNs used to stabilize plasma in fusion reactors
- ANNs used to coordinate infrastructure megaprojects
The “intelligence” part in artificial intelligence comes from the fact that these algorithms are very loosely based on what makes biological organisms intelligent: their brains. Artificial neural networks (as they are more accurately called) use large numbers of virtual neurons with connections of varying strength between them, sometimes called their “weights”, and the total number of these connections is referred to as the “parameter” count of the model.
With a bit of calculus you can figure out how to use training data to adjust the sometimes billions of parameters in an ANN, in order to make the network spit out more accurate answers. You repeat this process many times with a lot of data, and eventually the ANN will fine-tune itself to find patterns in the dataset and start spitting out better and better answers.
The benefit of ANNs is precisely that they effectively train themselves. Imagine writing a bunch of if/else statements to convert the text in an image into written text. It would be impossible, because there are quadrillions of different ways an image can look and still contain the same text: taken at a different distance, in a different writing style, under different lighting conditions, etc. You would be coding forever and would never solve it. But if you feed an ANN millions of images of written text, paired with the actual text they contain, under all these different conditions, then with a bit of calculus and a lot of computational power what you get out is a set of fine-tuned weights for an ANN that can identify the text in a new image you pass in.
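That training loop can be sketched at its absolute smallest: one “neuron” with two parameters, fitted by gradient descent on toy data (an illustrative sketch, not how any production OCR model is actually built):

```python
# Smallest possible "training loop": a single neuron y = w*x + b,
# fitted by gradient descent on the squared error. Real ANNs apply
# this same calculus to millions or billions of weights at once.
data = [(x, 2 * x + 1) for x in range(-5, 6)]  # training set for y = 2x + 1

w, b = 0.0, 0.0        # the model's two parameters, initialized at zero
lr = 0.01              # learning rate: how big each adjustment step is
for _ in range(2000):  # many passes over the training data
    for x, y in data:
        err = (w * x + b) - y   # how wrong the current prediction is
        # derivatives of the squared error with respect to w and b
        w -= lr * err * x
        b -= lr * err
print(round(w, 3), round(b, 3))  # converges close to w=2, b=1
```

The same mechanism, scaled up enormously and fed labeled images instead of number pairs, is what produces the OCR weights described above.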
Technology is fascinating but sadly you seem to have no interest in it and I doubt you will even read this. I only write this for others who may care.
Also, yes, computer vision is also based on ANNs. I have my own AI server with a couple of GPUs, and one of the tasks I use it for is optical character recognition, which requires you to load the AI model onto the GPU for it to run quickly; otherwise it is rather slow (I am using paddleocr). If the image I am doing OCR on is in a different language, I can also pass it through Qwen to translate it. If you ever set up a security system in your home, it will often use AI for object recognition. It’s very inefficient to record footage all the time, but with many modern security systems you can tell them to record footage only when they see a moving person or a moving car. Yes, this is done with AI; you can even buy an “AI hat” for the Raspberry Pi that was developed specifically for computer vision and object identification.
Literally, if you ever take a course in AI, one of the first things you learn is OCR, because it’s one of the earliest examples of AI being useful. There is literally a famous dataset with its own Wikipedia page, called MNIST, because so many people learning how AI works start by building a simple model that does OCR on handwritten digits, trained on the MNIST dataset.
I’m also surprised your hatred is towards large language models specifically, when usually people who hate AI despise text-to-image models. You do know that “AI art” generators are not LLMs, yes? I find it odd someone would despise LLMs, which actually have a lot of utility like language translation and summarization, over TTIMs, which don’t have much utility at all besides spitting out (sometimes…) pretty pictures. Although, I assume you don’t even know the difference since you seem to not know much about this subject, and I doubt you will even read this far anyways.
bro you literally know nothing about this topic. No investigation, no right to speak. You need to stop talking.
You are toxic, as well as being incredibly arrogant. A true example of the Dunning-Kruger effect. If you want to have a tantrum then by all means do so, but don’t pretend that you are on some sort of high ground when you make your pronouncements.
In every conversation you have had with me, you project opinions onto me that I do not have (Marxism vs. anarchism, calling me a Luddite, etc.) and construct strawman arguments that I did not make.
Do some self crit
Robots are dubbed “pearls on the crown of the manufacturing industry.” A country’s achievement in robotics research, development, manufacturing and application is an important yardstick with which to measure its level of scientific and technological innovation and high-end manufacturing…China will be the largest robot market in the world
— Xi Jinping
my food-obsessed brain reading the opening sentence “There’s a city called Hotpot? I bet it has good hotpot”
Ngl, this stuff is kind of terrifying. Not from a “China bad” perspective, but from just how much this technology is going to change things, and how fast it’s happening.
We might be living through an equivalent of the industrial revolution here.
I don’t think so. Until we get actual intelligence (now called AGI) and fusion power, we won’t have a second industrial revolution. Once the power issue is solved, we only have a resource problem left. We’ll see an explosion of industry if humanity can achieve fusion.
I know the meme is that fusion is always 20 years away, but with recent developments, and China’s private companies achieving two of the three critical conditions for sustained fusion, it really does feel like we’re less than two decades out from commercial fusion. It will be world-changing, for better or for worse, and I’m excited that we’ll be around to see what it does.
AGI isn’t real, it’s largely a buzzword without a rigorous definition. We will continue to gradually improve the quality of artificially intelligent systems as we improve the hardware and make more progress in understanding intelligence, but there will not be some turning point where there is a sudden explosion in progress from AI when we cross some non-existent AGI threshold. It will just continue to gradually improve over time.
I’m not talking about it coming from current LLM slop. I mean an actual system that is completely new.
A lot of automation can be done without AGI already. We can see automated factories, ports, buses, etc. There are general-purpose robots being put to use, as seen here. The article discusses how many processes within the government are becoming automated. All of this was human labor before. Just as automation created explosive technological growth in the 19th century, we could see a similar kind of thing happen today.
This seems magnitudes worse: at least with the industrial revolution, you could argue that labor wasn’t being fully eliminated, but redistributed and reoriented toward mass production and factory work. AI is the total ELIMINATION of human labor altogether. Even with other big tech advancements like the internet, it still created work in terms of all the infrastructure that had to be built, the expertise required to maintain and improve it, as well as generally creating many jobs that could not exist without the internet.
AI is the only situation I can see where it could completely remove humans from the system; even maintenance and upkeep it could do on its own. The infrastructure? It already exists. What do we as workers get from this? What’s left to look forward to?
The capitalists can’t automate away labor. That’s the whole fundamental limitation of the capitalist mode of production. The higher your “organic composition of capital”, the lower your profit rates (for the industry as a whole). The organic composition of capital is the ratio of constant capital (buildings, machinery, robots, energy) to variable capital (human wages).
The more the capitalists try to escape having to pay wages through automation (or escape competition through monopoly), the more they dig the graves of their whole class.
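The argument above is just Marx’s profit-rate formula, r = s / (c + v): if the rate of surplus value s/v is held constant while automation raises the organic composition c/v, r has to fall. A quick numerical sketch (the numbers are illustrative only, not from the article):

```python
# Marx's rate of profit: r = s / (c + v), where
#   c = constant capital (buildings, machinery, robots, energy),
#   v = variable capital (wages), s = surplus value.
# Hold the rate of surplus value s/v fixed and let automation raise c/v.
def profit_rate(c, v, surplus_rate=1.0):
    s = surplus_rate * v      # s/v held constant at 100%
    return s / (c + v)

for c in [100, 200, 400, 800]:  # rising organic composition c/v
    print(f"c/v = {c/100:.0f}  ->  r = {profit_rate(c, 100):.1%}")
```

With v fixed at 100, quadrupling the machinery cuts the profit rate from 50% toward 11%: the more wages are automated out of the ratio, the smaller the only source of surplus value becomes relative to the total capital advanced.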
In a practical sense as well, China leads the world in robotics because you need a vast government system to produce highly skilled engineers, reliable/cheap utilities and an industrial policy to generate demand for automation.
You can never fully eliminate labor, that goes against the labor theory of value. Also robots cannot grease themselves & computer servers need maintenance. Just as the internet replaced a lot of hard print publishing, helper robots will free up people to work in less automated areas like building infrastructure.
I don’t think we’ll see the elimination of all human labour in the near future; what’s much more likely is that human labour is going to be augmented by AIs. Ultimately though, to me the goal of a communist society is to free people from necessary labour as much as possible, and allow people to pursue their interests and self-development. If all our necessities are met by automation, then we can focus on doing whatever we find interesting, individually or collectively.
I see the strength of LLMs as something that is for regular people to interact with. Not so much for automation of paperwork in a work setting although that is one application.
E.g. Sometimes older people don’t interact with technology well. They only see buttons and menus with very brief labels on them, which can be daunting. They’re afraid of hitting the wrong thing. Often they don’t submit forms online because they don’t want to make a mistake. With many companies/organisations using online websites as a big part of their customer facing presence, older people get alienated.
An AI that converses to guide them and answer any questions would make technology more accessible.
I see the strength of LLMs as something that is for regular people to interact with. […] E.g. Sometimes older people don’t interact with technology well.
I think this argument is a bit flawed. If the main benefit of LLMs is facilitating the use of technology for an older generation that, having not grown up as immersed in technology as we are today, is not as well versed in its use, then does this benefit not disappear when that older generation dies out and the new “older generation” is those of us who have grown up with technology and are thus proficient in it? Why then would we still need this facilitator at that point?
In fact, I would take this line of thinking one step further: what happens when the new younger generation grows up with LLMs constantly facilitating their interfacing with technology? Will they perhaps become dependent on LLMs, having had no necessity to learn how to interact with technology without the LLM interface? Does this not just mean that LLMs will be self-perpetuating the need for their own existence? Is there not a risk that one day the skill to use technology without the crutch of LLMs will be lost altogether?
This is a risk vs. reward question: does the reward of convenience outweigh the risk of atrophy of certain skills? Of course, this is basically a rhetorical question, because I know what the historical answer of our societies to this question has always been, for any such new technology. It has always ended up being yes. And inevitably, we are going to end up embracing this new technology too, in some form or another, just like we did all the others in the past. That is just the way these things go.
I understand your fears and concerns, but I think you are slightly overreacting. Even with the invention of better A.I. and robotics, there will still be more jobs created eventually: supervision and improvement of A.I. and robotics, and new industries that previously may not have been possible.
The Chinese government has been very clear that, at least for now, robots/A.I. won’t completely replace human labor or thinking, just supplement it.
Oh my fucking God. Like, mere weeks after releasing AI and the Chinese are already using it to make society better. What have the US and Europe done with it? Deep fake porn, spam, and ruining everything.
Well, to be fair, the USA does use it positively in many ways as well; USPS is largely run on AI.
It’s pretty incredible to watch how differently this tech is applied in China and the west. It’s such a great illustration of how different sets of social and economic rules impact the development of society as a whole.
I don’t know what to think of this.
I don’t know what to think of AI, dammit.
Just get off the AI derangement syndrome forums that are convincing you to hate tech and realize technology is just a tool which can be good or bad depending upon its application and you do not need to have a generalized opinion on it as a whole. It’s like saying “I don’t know what to think of knives.” It’s just a weird statement. Knives are just knives, you can use them for bad things like stabbing people or good things like cutting up some peppers to go in hot pot. No need to have an opinion on knives in general. Same with AI.
I always thought AI was really cool, like inherently and on the face of it. Generative AI makes you feel like you’re out of Star Trek sometimes. The distaste so many people have with AI comes down to the fact it violates copyright in a nebulous way (lib shit) and that it’s a genuine threat to the livelihood of artists (real shit.) It will be easier to feel optimistic about AI when we can be sure we’re living in an economy that prioritizes lives over profit, because only in that society can generative AI and artists truly live together peacefully.
I think it’s a net positive as long as there’s a human in the loop. The key bit is this in my opinion:
While AI is playing a growing role in government work, officials say it is intended to assist, not replace, human workers — despite referring to such systems as “employees.” Futian’s regulatory framework requires each AI system to be monitored by a designated human supervisor to prevent errors and ensure compliance with ethical standards.
“The guardian of the AI-powered employee is responsible for overseeing its operation, and if any issues arise, the guardian is held accountable,” said Gao.
The AI isn’t the decision maker, it’s an automation tool that allows a human worker to do their job more efficiently, but responsibility still lies with the human.
That’s very cool.