Over the past 18 months, humanity has dutifully entered lockdown and emerged again – offering up vast amounts of personal data along the way. Remote work, Zoom schooling and contact-tracing became part of daily life; even today, in the name of public health, diners in Paris have their digital health passes scanned before they open their menus.
We have also been bombarded with evidence of how the algorithms that crunch our data can go wrong. Vaccine misinformation spreads across social networks at lightning speed. Germany’s recent election was beset by fake-news campaigns. In England, students chanted “F*ck the algorithm” after an algorithm was used to grade their exam results. And authoritarian regimes have used the pandemic to tighten their grip on digital surveillance, from China’s “social credit” scoring to facial recognition in Russia.
It is therefore encouraging to see the US and EU pledge to work together to tackle the “algorithmic amplification” of harmful content online, and to ensure that artificial-intelligence systems are “trustworthy” and uphold “democratic values”. This was just one of several promises made at the recent inaugural meeting of the Transatlantic Trade and Technology Council.
Such cooperation will be needed in the close-quarters fight for AI dominance being waged by China and Russia, both of which have been accused of conducting information warfare and cyberattacks against Western targets while further restricting the freedoms of their citizens at home.
The pledge is also a marker of the Biden administration’s shift toward greater accountability and regulation of Big Tech. To be sure, the US – home to the likes of Facebook Inc and Amazon.com Inc – has taken a slower and more piecemeal approach than the European Union, which has no equivalent of Silicon Valley and which is taking a tougher stance that could rein in “gatekeeper” platforms. (Before the TTC meeting, US officials reportedly voiced frustration with regulation they see as “too broad”.) But the Federal Trade Commission has warned that it will penalize companies that use harmful or biased algorithms.
There is also the problem of ascertaining how trustworthy an algorithm is in the first place. Enforcing rules against opaque processes is the “million-dollar question” for tech regulators, says Wojciech Wiewiórowski, the EU’s top data-protection supervisor.
On both sides of the Atlantic, much hope has been placed in efforts to make AI “explainable”. But that is not always enough: UK students angry with their exam-scoring algorithm weren’t reassured by the explanation that it used historical school performance data to standardize results. And algorithmic processes are often treated as closely guarded trade secrets: when the New York Police Department tried to cancel its contract with Palantir Technologies Inc. and requested copies of its analysis, the software firm refused to provide it in a standard format.
Understanding AI Systems
One goal should be rigorous testing of AI systems before they are released into the wild, and repeated audits afterwards. Carissa Véliz of the Institute for Ethics in AI at Oxford University draws parallels with the randomized controlled trials of medical treatments required for approval by the US Food and Drug Administration. An algorithm for assessing job candidates, for example, could be tested in an experiment with a control group and followed up later. Companies could also be compelled to share data with independent researchers.
Another area on which the EU and the US must work together is clarifying what happens when algorithms go wrong – and who is to be held responsible. Regulators know that existing product-liability rules will look outdated in a world driven by overlapping technological processes in which human decisions take a back seat. When an Uber self-driving car fatally struck a pedestrian in 2018, its human backup operator was found liable. But, chillingly, the technology had classified the pedestrian as another car, then an unidentified object, then a bicycle in the seconds before the collision. In a future where humans are overruled by algorithms, where exactly will responsibility lie?
There should also be clarity on what untrustworthy AI looks like. We must be prepared to ban technologies that are too dangerous even to test. Many countries, for example, want to ban lethal autonomous drones. Wiewiórowski has proposed a ban on surveillance-based targeted ads. Social scoring was explicitly called out by the TTC.
Human-centered AI has a long way to go. But by the time the next TTC meeting takes place, it will help to know not only what trustworthy looks like, but also what it doesn’t. —Bloomberg