AI Researcher calls for controls around the use of AI algorithms: Takeaways from CeBIT Australia 2019
By Chris Cubbage, Executive Editor
INCLUDES PODCASTS: Future of Social Media & Your Personal AI Assistant, and Cyber-Crisis Management
There was a tongue-in-cheek kick-off to CeBIT Australia 2019. City of Sydney Lord Mayor Clover Moore couldn’t resist the opportunity to take a sly swipe at Federal Energy Minister Angus Taylor over their ‘very personal’ climate change debate. Officially opening the event at the ICC Sydney, the Lord Mayor proudly highlighted the City’s work in helping to develop an environmental sustainability platform, commenced in 2015 and now used by 33 councils. “While the federal government talks, we do the work,” she said.
City of Sydney Lord Mayor Clover Moore
Importantly, cybersecurity was the first keynote topic at the largest ‘business technology’ conference held in the southern hemisphere. Mikko Hypponen, Chief Research Officer at F-Secure, gave the opening keynote, titled ‘Cyber Arms Race’. Mikko is widely respected, having lectured at the universities of Oxford, Stanford and Cambridge, and is curator of the Malware Museum at the Internet Archive.
Mikko affirmed at the outset, “Cybersecurity is not a product, it’s a process,” he said. “Criminals don’t want to hack you, they want your money. If you are too difficult to hack, they will move on. Nation state actors, by contrast, are following orders and therefore will not stop in their mission.” Turning to military actors, Mikko pointed out the clear difference in profile between China and Russia: China has hardware visibility across the globe, whilst Russia has very limited scale in hardware distribution. He took this to mean that China has far more reach around the globe than Russia, and therefore presents as a far greater military cyber threat.
Mikko highlighted to a broad technology audience that cybersecurity is a game of ‘cat and mouse’. He also turned to the game of ‘Security Tetris’, where your successes disappear whilst your failures pile up. “Rarely is anyone thanked for the work they did to prevent the disaster that didn’t happen,” he concluded.
Mikko Hypponen, Chief Research Officer at F-Secure
Call for ethical and transparent development of artificial intelligence
Kriti Sharma, who considers herself “an ethical AI person building AI for good”, provided a very insightful keynote on the ethical and transparent development of artificial intelligence and the algorithms driving it. Indeed, given Kriti’s research background and application experience, her insight and wisdom in calling for greater control and regulation of AI should be a wake-up call for policy makers.
Having built a four-foot robot in her London apartment, Kriti explained how she is training her robot to hold long, meaningful conversations, programming it with access to the internet so that it sources data and content as the conversation unfolds. During the process, however, she became aware of the inherent biases within the data sets. The bias became apparent when local children learnt of the robot and came to visit. Asked “who is the Prime Minister of the United Kingdom?”, the robot correctly answered, “Boris Johnson is the Prime Minister of the United Kingdom.” Then the children asked, “Who is the President of the United States?”, to which the robot answered, “Donald J Trump is the President of the United States, God help us!”
With this type of direct insight, Kriti explained, “What’s worrying me is how machines are learning.” We are seeing similar patterns across digital behaviour and the decisions being made by AI today. Algorithms are constantly making decisions about who we are and what we want: choosing the data and news people see, the routes they take and what they should eat. If you are a man or a woman, of a particular age or age range, or display a propensity towards certain purchasing behaviour or activities, these algorithms will target you directly, directing ever more content that relates to your personal criteria or behaviour at an increasingly granular level. Whether it is your name, colour, location or residence, the AI will know, and is trained to target you and stick to you.
There is an inadequate proactive culture of understanding of this technology. Kriti said, “I used to believe self-regulation would be the way, but I’m now seeing how the largest companies and some countries are using AI in a way that causes us to need regulation and controls around how it is used, and transparency in these systems.” Kriti highlighted that these algorithms are making biased decisions, for example within business applications: biased recommendations on who gets a job interview, who gets a loan or how much you will pay for insurance. Kriti asserted, “If humans were known to make these decisions, there would be outrage, but because algorithms are making them, it is going unnoticed and unchecked.”
We are in control: we can create AI the way we want, choosing the values we teach the system and the data it sources. We need to be aware of the bias we are programming into how these systems are applied. Kriti provided her own personal example, discovering that code written by women and released on GitHub was rejected 35 per cent more often than code written by men. With this in mind she changed her profile picture to a cartoon avatar (a cat with a jetpack), which resulted in an immediate change to the way her code was accepted.
Kriti highlighted a number of socially progressive AI projects being applied around the world, designed to overcome social stigmas and reach out to people in need. One in three women in South Africa face domestic violence, a figure similar to Australia’s, yet only one in seven cases is reported. Research into why there was such high under-reporting found three major drivers: the shame and stigma of domestic violence, judgement and victim blaming, and limited support lines. To overcome these obstacles, Kriti and her team built an AI tool called Rainbow, designed as a non-judgemental AI accessed via a hotline. Having conducted over 750,000 consultations, it was found that victims were opening up to the AI, with one in a thousand using the words ‘violence’ and ‘rape’ in these consultations. A similar example in India has involved 1.5 million consultations with an AI overcoming a traditional stigma and taboo around sex and sex education.
Kriti concluded with some of the key rules we should follow as AI continues to advance: AI should reflect the diversity of the users it serves; AI must be held to account, and so must users; reward AI for ‘showing its workings’; and AI will replace, but it must also create. The importance of these rules is underlined in Kriti’s final statement, “so that when the robots do take over, at least they are nice.”
TAKEAWAY CEBIT PODCAST INTERVIEWS:
PODCAST: #CeBITAus – Future of Social Media & Your AI Personal Assistant – Interview with CeBIT Chairman Stephen Scheeler, former Facebook ANZ CEO
Chris Cubbage with Stephen Scheeler, CeBIT Chairman 2019
Stephen is a true visionary on any topic to do with social media, the future of technology and current technological advances. This discussion delves into the challenges, opportunities and potential dangers of social media, the future of work and the rapid advance of technology – be it your own personal assistant that also protects you online, through to how your work day is set to change. Enjoy the discussion.
PODCAST: #CeBITAus – Preparing your organisation for a cyber crisis, interview with Wayne Tufek of Cyber Risk
Great to see Wayne Tufek in action at CeBIT Australia 2019 delivering an Innovation Lab workshop on ‘Preparing your organisation for a cyber crisis – Rewriting the rule book’. We dive into crisis communication and how organisations and government need to be fully prepared in their response and communication strategy in order to stay on the front foot and drive the ‘cyber-crisis’ narrative.
Wayne Tufek is a contributor to the Australian Cyber Security Magazine, Issue 6.
MySecurity Media were media partners to the event
VIDEO Takeaways: Robotics & Gaming developments
#CeBITaus 2019 with Asia Pacific Security Magazine
Posted by Drastic News on Friday, 8 November 2019