The only way to master VARC during your CAT preparation is by practicing actual CAT question papers. Practice RCs with detailed video and text solutions from previous CAT question papers.
The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.
In [my book “Searches”], I chronicle how big technology companies have exploited human language for their gain. We let this happen, I argue, because we also benefit somewhat from using the products. It’s a dynamic that makes us complicit in big tech’s accumulation of wealth and power: we’re both victims and beneficiaries. I describe this complicity, but I also enact it, through my own internet archives: my Google searches, my Amazon product reviews and, yes, my ChatGPT dialogues. . . .
People often describe chatbots’ textual output as “bland” or “generic” – the linguistic equivalent of a beige office building. OpenAI’s products are built to “sound like a colleague”, as OpenAI puts it, using language that, coming from a person, would sound “polite”, “empathetic”, “kind”, “rationally optimistic” and “engaging”, among other qualities. OpenAI describes these strategies as helping its products seem “professional” and “approachable”. This appears to be bound up with making us feel safe . . .
Trust is a challenge for artificial intelligence (AI) companies, partly because their products regularly produce falsehoods and reify sexist, racist, US-centric cultural norms. While the companies are working on these problems, they persist: OpenAI found that its latest systems generate errors at a higher rate than its previous system. In the book, I wrote about the inaccuracies and biases and also demonstrated them with the products. When I prompted Microsoft’s Bing Image Creator to produce a picture of engineers and space explorers, it gave me an entirely male cast of characters; when my father asked ChatGPT to edit his writing, it transmuted his perfectly correct Indian English into American English. Those weren’t flukes. Research suggests that both tendencies are widespread.
In my own ChatGPT dialogues, I wanted to enact how the product’s veneer of collegial neutrality could lull us into absorbing false or biased responses without much critical engagement. Over time, ChatGPT seemed to be guiding me to write a more positive book about big tech – including editing my description of OpenAI’s CEO, Sam Altman, to call him “a visionary and a pragmatist”. I’m not aware of research on whether ChatGPT tends to favor big tech, OpenAI or Altman, and I can only guess why it seemed that way in our conversation. OpenAI explicitly states that its products shouldn’t attempt to influence users’ thinking. When I asked ChatGPT about some of the issues, it blamed biases in its training data – though I suspect my arguably leading questions played a role too. When I queried ChatGPT about its rhetoric, it responded: “The way I communicate is designed to foster trust and confidence in my responses, which can be both helpful and potentially misleading.”. . .
OpenAI has its own goals, of course. Among them, it emphasizes wanting to build AI that “benefits all of humanity”. But while the company is controlled by a non-profit with that mission, its funders still seek a return on their investment. That will presumably require getting people using products such as ChatGPT even more than they already are – a goal that is easier to accomplish if people see those products as trustworthy collaborators.
Question 16: All of the following statements from the passage affirm the disjunct between the claims about AI made by tech companies and what AI actually does EXCEPT:
Consider each option in order. Option 1 says the product's 'veneer of collegial neutrality' lulls us into absorbing false or biased responses. This clearly points to a disconnect between what tech companies claim about AI and what it actually does.
Now, consider option 2. The passage says AI's outputs are built to be generic, professional and trustworthy. Option 2 exposes a sexist bias in the AI's response. This option, therefore, also relates to a disconnect between claims and reality.
Option 3 says we are both victims and beneficiaries of big tech. In the last paragraph, the passage mentions OpenAI's stated goal of building AI that "benefits all of humanity". Yet we are victims of big tech too. Option 3, therefore, also affirms the disjunct between the claims about AI made by tech companies and what AI actually does.
Consider option 4. The author is not aware of research on whether AI favors big tech and can only guess why it seemed that way. So no specific claim about AI bias is made here, and hence there is no disjunct as such. Option 4 is the answer choice we are looking for.
The question is: "All of the following statements from the passage affirm the disjunct between the claims about AI made by tech companies and what AI actually does EXCEPT:"
Choice 4 is the correct answer.