The Monthly Briefing
A new (non)normality
This week marks one month since the death of George Floyd, an African American man killed by police during an arrest in Minneapolis, Minnesota. The events that have followed, both inside and outside the United States, are well known and are already part of the history of the fight against discrimination and social injustice.
This crisis confronts us with an uncomfortable truth: inequality is still present in our society and has consequences in many aspects of our lives. This reality can be seen in all areas, including the application and use of technology.
Recently, IBM, Amazon and Microsoft have all reported a slowdown in the implementation of facial recognition technologies because of the risk of racial bias. One of the main reasons cited for this pause is that these companies are waiting for a national law to be passed to govern this technology. However, some past lobbying actions suggest that these companies do not actually want strict rules against facial recognition use. IBM has announced that it is halting the development and research of this technology for any use that, in their own words, might favour "mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles". Amazon, for its part, has placed a one-year moratorium on police use of Rekognition, its facial recognition technology.
However, potential risks should not be tied to specific technologies, but to their specific uses and applications. Citizens do not place their trust in AI or any other technology as such, but in the firms, organisations or services using it. Therefore, it is essential that clear, technology-agnostic governance and risk mitigation requirements are in place. Equating AI with higher discrimination or safety risks undermines trust in this technology and slows its adoption. It should be stressed that the same technology that can be used for a bad purpose can perfectly well be applied in the opposite direction: to verify that no one is discriminated against and to guarantee fundamental rights.
The debate on the regulation of Artificial Intelligence is currently very much alive in Europe. The EU regulatory framework is comprehensive and strongly oriented towards the protection of citizens' rights, making the EU a global benchmark on this issue. Therefore, authorities should focus on providing guidance on how to comply with existing regulations to avoid unfair discrimination and achieve suitable levels of explainability or interpretability, and on creating a harmonized framework of regulators' and supervisors' expectations for each AI application depending on its criticality.
Coronavirus is still with us, but now it's traceable
With the start of the "new normality", the main European countries are implementing contact-tracing apps to control contagion and prevent possible outbreaks from getting out of control again. Some countries, such as Germany and Denmark, are opting for a decentralised model, while others, such as France, are collecting data on a central server.
Germany launched its application on June 16, after a development cost of 20 million euros, and reached 10 million downloads in just three days. The so-called Corona-Warn-App is based on Apple and Google technology, which has also been adopted with notable success by other countries such as Italy, and guarantees the privacy and anonymization of data. It works over Bluetooth, which makes it possible to detect close contacts.
On the opposite side, France advocates the centralized model, which has raised some concerns about the level of data privacy. The French application also uses Bluetooth technology but, according to Margrethe Vestager, EU Commission vice-president, "it may not be able to connect with others [apps] across the European Union because it stores data centrally". The UK has abandoned the app it had been developing after detecting technical issues with Bluetooth technology. On June 18, the government admitted that the application was faulty and that it would switch to a model developed by technology giants Apple and Google.
In Spain, the latest news indicates that the app will not be ready until after the summer, when its effectiveness has been proven. The Spanish application will be open source and based on the decentralized model.
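The privacy difference between the two models comes down to where contact matching happens. In the decentralised (Apple/Google) approach, phones broadcast short-lived random identifiers over Bluetooth and keep the identifiers they hear on the device; the server only publishes the identifiers of users who report a positive test, and each phone checks for matches locally. The sketch below illustrates that idea in Python; it is a greatly simplified illustration, not the real protocol, and all class and method names here are our own assumptions.

```python
import secrets

class Device:
    """Toy model of a phone in a decentralised contact-tracing scheme."""

    def __init__(self):
        self.own_ids = []       # random rolling identifiers this phone broadcasts
        self.observed_ids = []  # identifiers heard over Bluetooth, stored on-device

    def new_rolling_id(self):
        # In the real protocol identifiers rotate every few minutes and are
        # derived from daily keys; here we simply use fresh random bytes.
        rid = secrets.token_hex(16)
        self.own_ids.append(rid)
        return rid

    def observe(self, rolling_id):
        # Record an identifier broadcast by a nearby device (a close contact).
        self.observed_ids.append(rolling_id)

    def check_exposure(self, published_infected_ids):
        # Matching happens locally: the server only publishes identifiers of
        # users who reported a positive test, and never learns who met whom.
        infected = set(published_infected_ids)
        return any(rid in infected for rid in self.observed_ids)

# Usage: Alice and Bob meet; Alice later tests positive and consents to
# publishing her identifiers. Bob's phone detects the match on-device.
alice, bob = Device(), Device()
bob.observe(alice.new_rolling_id())      # Bluetooth proximity encounter
published = alice.own_ids                # uploaded after a positive test
print(bob.check_exposure(published))     # True: Bob is notified locally
```

In the centralised model, by contrast, the contact graph itself is uploaded, which is precisely what raises the privacy and interoperability concerns mentioned above.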
+ Why is the term "assured autonomy" a dangerous concept? (New York Times)
+ Experience design with ML, soooo well explained. (Apple)
+ United States: the great catalyst of talent in Artificial Intelligence. (Macro Polo)
+ "How clean is your cloud?" (Greenpeace)
+ Focus on producing good-quality data in the first place, not on finding and fixing problems later. (Towards Data Science)
+ References and examples in Data Science (Data Science sin humo)
"In a racist society, it is not enough to be non-racist. We must be anti-racist".
Angela Davis, American political activist, philosopher, and academic.
We very much like the point Emily Hadley makes about this quote, bringing the concept into our professional field: "in a racist society, it is not enough to be non-racist [data scientists]. We must be antiracist [data scientists]". In her article 5 Steps to Take as an Antiracist Data Scientist (Medium), Hadley tells us what we can do to make a positive impact in our field: "We must confront the ways in which data and algorithms have been used to perpetuate racism, and eliminate racist decisions and algorithms in our own work."
BBVA DATA GALAXY 🌌
Data-wrapped products are not eaten, but according to an MIT Sloan Management Review article, they can delight customers and increase profitability. The text highlights data products developed by BBVA, such as the categorizer and the features incorporated in the BBVA personal finance management app. These products embed data analytics capabilities as part of their value proposition, offering much higher-impact experiences.
Data wrapping is a distinctive data monetization approach whose main characteristics are: (1) product owners, not IT, lead the product road map; (2) economic returns come from a lift in sales, not from an internal business process improvement; and (3) it is risky: wrapped products could confuse, irritate, offend, or drive away the customers they serve. You can read more about it in this article.
We’re inviting you to take part in our photo contest! Post what DIVERSITY means to you on Instagram using #MakeItVisibleBBVA and you could win one of three amazing Polaroid Snap Touches – deadline July 5th. (Legal terms)
For any question or suggestion, you can also write to firstname.lastname@example.org
You can enjoy much more content related to data science, innovation, new financial analytics solutions, and how we work on our website: bbvadata.com
Let's talk about it. Join the conversation on LinkedIn.
© 2020 All rights reserved. BBVA Data & Analytics. Avenida de Manoteras, 44, 28050. Madrid.