The promises and perils of AI that’s ‘smarter’ than us


Working with artificial intelligence (AI) has positives and negatives, but how we approach it will determine which side dominates, says inventor and futurist Ray Kurzweil.

Ray Kurzweil is arguably one of the great inventors of his time. During his teen years in the 1960s, he developed pattern recognition software that could write music in the vein of famous composers.

In 1976, it was the Kurzweil Reading Machine, which combined omni-font optical character recognition, a charge-coupled device (CCD) flatbed scanner and a text-to-speech synthesiser (all three new inventions) to allow blind people to enjoy printed material.

This was followed by the first synthesiser able to emulate orchestral instruments, all of which earned Kurzweil entry into the National Inventors Hall of Fame in 2002. His current role is Director of Engineering at Google.

At age 70, he has earned a reputation as a formidable futurist as well as an inventor (and author of multiple best-sellers).

“In order to create technology, you have to be able to anticipate the future; you have to know where technology is going,” he told an Academy of Achievement audience in 2000.

“There’s no point solving problems that wouldn’t be problems and wouldn’t have applications by the time the project got finished.”

At the Australian Engineering Conference, Kurzweil will deliver the keynote ‘Thinking Machines – the Promise and the Peril’, discussing the vast implications of progress in artificial intelligence.  

Ray Kurzweil at South by Southwest in 2017.

The possibility of AI ‘smarter’ than us offers both promise and peril. The promise involves better tools to combat issues such as disease, poverty and global warming. The peril is something that might escape our control, whatever that looks like. We’ll be okay though, believes Kurzweil, if we can approach the issue the way we did biotechnology.

“A meeting called the Asilomar Conference on Recombinant DNA was organised in 1975 to assess its potential dangers and devise a strategy to keep the field safe,” he wrote in 2014, adding that the resulting guidelines had been adopted and no significant problems had emerged since.

“We have the opportunity in the decades ahead to make major strides in addressing the grand challenges of humanity. AI will be the pivotal technology in achieving this progress. We have a moral imperative to realise this promise while controlling the peril.”   

Ray Kurzweil will deliver a keynote address at this year’s Australian Engineering Conference 17-19 September in Sydney. To learn more and to register, click here.

We are in the middle of an explosion in drone capability, says expert


Expert Dr Catherine Ball says the way we currently use drones is only scratching the surface of what they can do for us.

When luxury Italian fashion house Dolce & Gabbana sent a fleet of drones down the runway of its Milan show in February this year, the fashion world was aghast. Flying robots carrying the brand’s jewel-encrusted leather handbags?

While millennial fashion models might have felt cause for concern, across the globe in Brisbane, environmental scientist Catherine Ball was celebrating the unexpected use of drones.

“It got me very excited,” she said.

“Here was a collaboration between fashion design, creativity and drones used in a scenario that you would never have thought of.”

A world leader in drone research, Ball is fascinated by the future of the technology, which stretches well beyond the world of fashion. She will be sharing her insights in a panel discussion at this year’s Australian Engineering Conference (AEC).

The production of drones for personal and commercial use is growing rapidly. Data from research and advisory company Gartner predicts the global drone market will be valued at $US11.2 billion by 2020.

Ball has delivered a number of world-firsts in environmental and infrastructure surveying using drones, monitoring bushfires and coral reefs and collecting a range of data to support effective ecological and engineering processes.

She sees drones as an efficient tool for translating patterns of nature in remote locations. Her breakthrough project occurred in 2014 while working with engineering and environmental consultancy URS (now AECOM) when she flew human-sized drones to track turtle habitats along the coast of Western Australia.

They spotted an endangered oceanic manta ray species not seen in many years, and she was promoted to regional lead for unmanned aerial systems. She was also awarded Telstra Woman of the Year (Corporate) in 2015 for her work in drone research.

Ball has a PhD in microbial ecology and is currently the managing director of Elemental Strategy, where she consults to government and private industry on the adoption of technology, such as drones. Ball also works with She Flies, which promotes gender equality in science, technology, engineering, the arts and maths (STEAM) careers.

Drones to the rescue

Ball’s love of nature was fostered watching David Attenborough documentaries as a child. However, her initial interest in the environment was sparked at the age of five, when she saw a television program about the 1984 famine in Ethiopia.

“It’s one of my earliest memories,” she said.

“I remember being quite shocked by it.”

Ball said the current application of drone technology represents the mere tip of the iceberg.

“The idea that we can use drones to get to places and people faster to save lives is really floating my boat,” she said.

“I am excited about how drones are developing outside of the data collection aspect. Some of the humanitarian projects that really excite me involve delivery of blood or drugs or transporting a human organ when needed through a congested city via drones.

“It’s happening in parts of Africa already, and I feel like it should also be happening here in Australia. It’s something we’re looking at, particularly in Queensland. How can we support our remote and regional communities with this kind of technology? Australia is probably the best place in the world to fly a drone.

“We were the first to have commercial drone legislation in 2002, and a lot of other countries look at the Australian model in terms of airspace regulation. It’s the best place in the world to be a drone scientist.”

Ball also sees drones having an impact on future engineering processes such as data visualisation.

“You can process the data, put it in 3D, stamp the different spectral signatures and walk around inside a virtual reality system without actually having to be on location,” she said.

“The high-resolution nature of the data means that you can look for things in ways that you wouldn’t normally and visit places without having to set foot in them.”

More than selfies

Ball is also inspired by the rise of wearable drones. Gartner’s research shows the personal drone market grew by an estimated 34.3 per cent in the 12 months to February 2017, and Ball believes these pocket-sized machines will be increasingly valuable in gathering vital data.

“Imagine having a little wearable selfie drone and you come across something that’s broken or an accident has happened or there’s a flood level that needs to be checked,” she said.

“Yes, there might be some rather silly uses for wearable drones, but having a geotagged photograph to help somebody in an incident is also something worth thinking about. I always like to take it back to a genuine humanitarian opportunity.”

Dr Catherine Ball will be a speaker at the Australian Engineering Conference 17-19 September in Sydney. To learn more and to register, click here.

It’s time to change the way we think about AI and robotics


When we think about artificial intelligence (AI) and robotics, it’s usually “us versus them”, says Dr Catherine Ball. But according to her, that’s not the case at all.

The rise of wearable and implantable technologies means people need to rethink their relationship with AI and robotics. What’s needed for this to happen?

We asked Dr Catherine Ball ahead of her appearance at the upcoming Australian Engineering Conference to explain what the future looks like when AI and people work, live and play together.

There will be a panel discussion about human-AI interactions and the ethical implications at the upcoming Australian Engineering Conference. To register, click here.


What happens to infrastructure design when AI has a say?


What happens to design and consultancy in a world in which computers are doing most of our thinking? At the upcoming Australian Engineering Conference, Tim Chapman will fill us in.

Until recently, computers and digital assistants have simply done what humans have told them to do. Even apparently complex and ‘intelligent’ computer systems have only been able to carry out tasks they have been pre-programmed for.

For example, online insurance systems have offered immediate quotes. Collaborative robots have helped with tasks on factory floors. Doctors have utilised technology to conduct surgery more safely and with greater precision.

But today, computers are beginning to think, learn and make their own decisions without a human programmer telling them what they should decide, said Tim Chapman, leader of the Infrastructure London Group at Arup. It’s a brave new world, indeed.

“For instance, think of how a doctor might spot moles, and how they might describe the factors that would lead to them thinking that a mole might be cancerous,” Chapman said.

“Perhaps it’s pink around the edges, or rough, or it has a changed shape or begun to bleed. Nowadays you could show a computer 10,000 moles, with a description of whether they went on to become malignant or not. Then you could leave that computer to work out the characteristics of each one by various methods, and trust it to recognise moles that are likely to become cancerous in the future better than the best doctor can.”
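The mole example Chapman describes is supervised classification in a nutshell: show a model labelled examples and let it work out the weighting of each characteristic for itself. A minimal sketch of the idea, using invented synthetic stand-in data and a hand-rolled logistic regression rather than any real clinical dataset or method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a labelled mole dataset: four illustrative
# features (pink border, roughness, shape change, bleeding), with
# malignant labels generated to co-occur with those features.
n = 1000
X = rng.random((n, 4))
true_w = np.array([1.5, 1.0, 2.0, 2.5])    # hidden "ground truth" weights
logits = X @ true_w - 3.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

# Plain logistic regression trained by gradient descent: the model
# itself learns how strongly each characteristic predicts malignancy.
w = np.zeros(4)
b = 0.0
lr = 0.5
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))     # predicted probabilities
    w -= lr * (X.T @ (p - y) / n)          # gradient step on weights
    b -= lr * float(np.mean(p - y))        # gradient step on bias

preds = (1 / (1 + np.exp(-(X @ w + b)))) > 0.5
accuracy = float(np.mean(preds == y))
print(f"training accuracy: {accuracy:.2f}")
```

Real systems of the kind Chapman mentions use deep networks over raw images rather than four hand-picked features, but the training loop has the same shape: examples in, learned decision rule out.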


Big data


What does all of this have to do with engineering? Plenty, Chapman said. Because engineers are trained to conduct complicated calculations in order to build safe structures, their jobs will likely be enormously altered by artificial intelligence. In fact, in some cases they already have been.

“Right now a graduate experiences a series of reasonably mundane tasks in order to pick up their trade,” Chapman said.

“The graduates that we employ do work that is very similar to what I used to do many years ago when I was in their shoes. I didn’t have a computer on my desk when I was at their stage, but actually the tasks are fairly similar and their rate of acquiring knowledge is very similar to what mine was. But because of the automation of various tasks they now sometimes can’t learn the basics of the trade, how things really work. This potentially means they’ll become less capable professionals.

“The really big question is, how does a graduate learn their trade at all in five years’ time, when digital has really kicked in?”

Certain types of engineers will see their roles being automated sooner than others, he said. This particularly applies to roles that are less multi-disciplinary, such as the purer forms of structural engineering, which can be more easily automated or “digitised away”.

Other parts of the profession that are highly multi-disciplinary, such as station design, will be far more difficult to replicate with computers and robots because of the sheer number of processes that must be balanced, managed and optimised.

“Crudely, if you’re already working or being trained, you’re probably okay,” Chapman said.

“If you become a graduate in three or four years’ time, you might not be. And in some fields, such as transport planning, things have already changed hugely. That’s a profession where the prolific nature of Google data enables people to draw much better conclusions than you’ve been able to historically draw about how roads work. Therefore it is already having an enormous influence over how roads are designed, the capacity of networks and where improvements are required, etc.

“If I was a transport planner halfway through my career, having been immersed in the old world, but I actually have 20 more years of the new world ahead of me, I might be very afraid right now unless I was fast learning about the new world and all its magic.”

It’s all very dystopian, but Chapman said what we’re actually seeing is a natural push towards utilising technology to become more efficient and productive, as is occurring in every other industry. Some firms will win and some will lose, but the entire industry will be transformed in ways that people haven’t previously considered.

Society itself will be transformed. It is these all-encompassing industrial and societal changes that Chapman would like to highlight to his audience at AEC 2018, and specifically around how engineering design, construction and consultancy could change.

Of course, there is also positive news. Society will be offered better solutions more cheaply across all industries. Entirely new types of businesses will enter the engineering realm, as they have done in retail (Amazon), accommodation (Airbnb), taxis (Uber) and insurance (Friendsurance). These businesses will offer engineers career opportunities they’d never previously imagined.

“There are a whole lot of new providers of data coming in who are unconnected with the old ways,” Chapman said.

“They might not have the same level of skill in terms of how things are engineered, but they have a huge amount of skill in terms of how data gets managed and applied.”


Slippery slope


Chapman offers an example from geotechnical engineering, his original line of work.

“In order to work out how stable a railway embankment is, I’d dig a whole lot of boreholes along the embankment to find out exactly what it’s made out of,” he said.

“I’d test samples from those bore holes. I’d do complicated analyses to work out the most critical slip circle by which the embankment might fall down, etc. But in the future, somebody might buy data sets that could show, for instance, whether the ground is wet or dry. That will have a correlation to stability.”

“Some of the data might show the slope angle using laser or LIDAR scans of the surface. It could show whether the embankment has moved by three centimetres, or not, over the previous five years. This could be done with data sets and without any engineering knowledge.

“On a Friday afternoon, a job that would have taken me six months and $200,000 to do traditionally, could be started and finished – and possibly even more reliably than it would have been done before!”
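The workflow Chapman sketches can be pictured as a purely data-driven screen over bought-in datasets. The function below is an illustrative toy only: the feature names, thresholds and weights are all invented for the example and are not engineering guidance.

```python
# Hypothetical data-driven embankment screen combining three bought-in
# datasets: ground moisture, slope angle from a LIDAR surface scan, and
# measured movement over the previous five years. All thresholds and
# weights are made up for illustration.

def embankment_risk(moisture_pct: float, slope_deg: float,
                    movement_mm_5yr: float) -> float:
    """Return a crude 0-1 risk score from three remote-sensing inputs."""
    score = 0.0
    if moisture_pct > 30:       # wet ground correlates with instability
        score += 0.4
    if slope_deg > 35:          # steep face seen in the surface scan
        score += 0.3
    if movement_mm_5yr > 30:    # e.g. more than 3 cm in five years
        score += 0.3
    return score

# Screen a short list of (id, moisture, slope, movement) records and
# flag those above a review threshold - no boreholes required.
assets = [
    ("EMB-001", 12.0, 28.0, 5.0),
    ("EMB-002", 38.0, 41.0, 32.0),
    ("EMB-003", 33.0, 30.0, 12.0),
]
flagged = [a[0] for a in assets if embankment_risk(*a[1:]) >= 0.5]
print(flagged)
```

In practice the correlations would be learned from historical failure records rather than hand-set, but the shape of the job is the same: rank a whole network of assets from data, then send engineers only to the flagged ones.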

In terms of engineering design, consultancy and construction, humans will always be relevant, Chapman said. But their relevance will change enormously. Those who are prepared for such changes, including individuals and organisations, will reap massive rewards.

Tim Chapman is a keynote speaker at the upcoming Australian Engineering Conference, which will focus on AI, robotics and the future of engineering. To register, click here.  

With autonomous robots on the rise, what do engineers need to know?


As collaborative robots give way to autonomous ones, the future is not as frightening as you might think, says Professor Elizabeth Croft, presenter at the Australian Engineering Conference 2018.

When her daughter came home with a textbook that said robots are designed by ‘scientists’, Professor Elizabeth Croft was very surprised. Most of the driving force behind robot technology and capability is coming from engineers, she says.

“I had a bit of a fit when I saw what the textbook said. I told my daughter, ‘No, actually, engineering is pushing the forefronts of robotics. Science, art and design all contribute and help us to think about it, but the engineering part is what allows us to continue to innovate’,” said Croft, dean of the faculty of engineering at Monash University.

When Croft talks about the future of robotics, she’s not discussing the hand-guided ‘collaborative’ machines that, for instance, help people on an assembly line to lift engine blocks into car bodies and switch off when their operator is absent. She means fully autonomous robots.

“Collaborative robots, or ‘cobots’, were passive in the sense that they would not act unless the operator put motive force into them,” she said. They were very safe because they were not autonomous. If the operator did not touch the cobot’s controls, it would stop.

“Where we’ve moved is to a place where now we have autonomous robots that are independent agents, such as delivery robots, robots operating as assistants, etc.,” she said.

“This is the area that I focus on: robots that bring you something. Maybe they hand you a tool. Maybe they carry out parts of an operation that are common in a workplace. We’re interested in collaborating with those agents.”

These autonomous robots are different from cobots, Croft said, because they have their own agenda and their own intent. They are not tele-operated, and they are not switched on and off by an operator. They have their own jobs, just like people in the workplace. They need no permission to operate.

It’s in this area that Croft works, in the space where rules of engagement have to be figured out. Several major issues are slowing things down right now, such as questions around liability and safety frameworks. Also, how does the front-end work, or how do humans interact with the robot? How do they tell it what they want it to do? If voice operation is key, then we’re clearly not there yet, judging by the voice interactions with our smartphones.

“We are the ones who first see the potential impacts. If we don’t prepare our people for that, we’ll see unintended consequences of the technology.”

And what about social and ethical impacts of technology in society? These are powerful, autonomous systems that are being developed, so how and where should boundaries be drawn to ensure Skynet doesn’t send a cyborg assassin to kill Sarah Connor?

“The underlying programming and bounding of how much autonomy those systems have really impacts what consequences can happen,” Croft said.

“So, it is very important that students of this technology think about ethical frameworks in the context of programming frameworks. Ethics must underlie the basic design and concepts around how an autonomous system operates. That needs to be part of the fundamental coding, part of the training of an engineer.”


Reducing complication


In order to tone down the Terminator imagery, Croft offers an example of how an autonomous robot might change workflow for the better.

When you buy a piece of furniture from IKEA, the instructions contain a small picture of a man and look friendly, but they’re actually quite complicated. There are numerous pieces, many just a little bit different from each other. Some are very small, some are very large, some are flexible. The assembly requires dexterity and making choices about what must be done in which order. Constant close inspection is a must because of the numerous dependencies.


Professor Elizabeth Croft.

“This job cannot be fully automated because it’s too expensive,” she said.

“But there are parts of that operation where it would make a lot of sense to have more automation or assistance involved.”
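The ordering problem behind Croft’s flat-pack example, many parts whose steps depend on other steps being finished first, is one an assistive robot’s planner would have to solve. It maps neatly onto a topological sort; a minimal sketch over an invented bookcase dependency graph:

```python
from graphlib import TopologicalSorter

# Invented dependency graph for a flat-pack bookcase: each assembly
# step maps to the set of steps that must be finished before it.
steps = {
    "fix base": set(),
    "attach side panels": {"fix base"},
    "insert shelf pins": {"attach side panels"},
    "attach back board": {"attach side panels"},
    "fit shelves": {"insert shelf pins"},
}

# static_order() yields one valid sequence that respects every
# dependency, which is exactly what a task planner needs.
order = list(TopologicalSorter(steps).static_order())
print(order)
```

A real assistive system would also weigh which steps suit the robot (repetitive, heavy) and which suit the person (fiddly, judgement-laden), but any such division still has to respect this dependency ordering.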

Such technology is very close to reality right now, but we don’t have the legal and other frameworks to make it fully operational.

“We’ve come to a place where people can grab onto a robot, move it around, show it an operation, then press a button and the robot does it,” Croft said.

“But because of legal issues, liability and occupational health and safety, there are risks that need to be managed. There are issues around getting the person and the robot to come together in a workspace in a safe way. Who’s responsible? When the operator is always in charge, then there’s no doubt. But when the operator has no longer got their hand on the big red button, then there is risk.”

Who assumes that risk? In Europe, Croft said, the risk is assumed mainly by the manufacturer of the robot, which creates a challenge for innovation. In North America, the risk is often assumed by the person or company that owns the robot. In other jurisdictions, the risk could be assumed by the worker who is using the robot.


Swapping robots with humans


Outside of the legal framework, the biggest issue is actually the workflow itself. On a typical production line for instance, if one worker can’t do a job, another is brought in to take their place. People are quickly interchangeable. The same needs to be true of a robot being replaced by a human. If the robot breaks down, the business can’t stop operating. So, humans and robots must be easily swapped in and out.

There also needs to be a clear understanding of the value being offered by the robot, to ensure the worker is comfortable working with the robot. And the worker must feel that the robot understands what they do, too.

“It will become a greater and greater requirement for educators of people working in software engineering or computer engineering to create a real understanding of the impacts – ethically, socially, environmentally – of the designs they create,” Croft said.

“We’ll need professionals interested in public policy and engineers with a strong ethical framework. The engineers are creating the future of technology. We are the ones who first see the potential impacts. If we don’t prepare our people for that, we’ll see unintended consequences of the technology.”

What kinds of technology will engineers need to use in 10 years? 20 years? 50 years? Elizabeth Croft will be part of a panel discussion about how engineers will partner with technology in the future at the upcoming Australian Engineering Conference. To register, click here.

It’s time to set the record straight about artificial intelligence


Artificial intelligence will be able to do many things – destroying the world won’t be one of them, says Professor Toby Walsh.

In the 2013 movie Her, a lonely man called Theodore (Joaquin Phoenix) falls in love with his new operating system Samantha (Scarlett Johansson). Critically acclaimed, the movie won an Academy Award for Best Original Screenplay and was nominated for Best Picture.

However, the acclaim wasn’t limited to the arts community. According to one of Australia’s top artificial intelligence (AI) experts, Toby Walsh, the film resonated with his community too.

“Unfortunately, if you ask AI researchers which AI movie they like, they complain that most of them paint such a dystopian picture of what AI’s going to do to the planet,” he said.

“One that I like, and many of my colleagues have said they like as well, is the movie Her which is not a very dystopian picture at all, and gets something very right, which is that AI is the operating system of the future.”

Walsh said the way we interact with computers has evolved from plugging wires into the front panel of the computer, to machine code programming, MS-DOS with its command line interface, and ultimately the graphical user interface we are all used to today.

“The next layer is going to be this conversational one. You already see the beginnings of that in systems like Siri and Cortana,” he said.


Toby Walsh with the collaborative industrial robot Baxter. (Photo: Grant Turner/UNSW)

“As we move more to the Internet of Things, your house is full of devices that are connected to the internet that don’t have screens or keyboards. The front door, the light switch, the fridge, all of these are going to be networked together. There’s only one interface you can have with these, which is voice interface.

“You’ll have this ongoing conversation that follows you around, and authenticates you on the biometrics of your voice. It will learn everything about you and your preferences. It will be very much like the movie. People will get quite attached to this person they’re having the conversation with all the time.”

He said it’s hard to think of an area that artificial intelligence is not going to touch in some way.

“It’s going to touch education, it’s going to touch healthcare, it’s going to touch pretty much every form of business you could imagine,” he said.

“Anything cognitive that we do, you can imagine it touching. It’s hard to begin to think about what it won’t change.”


Next move


Walsh said there are a lot of misconceptions out there about what artificial intelligence is able to do.

“If you summed up all the things that you read in the newspapers, then you’d imagine it’s only a matter of moments before the machines are going to be taking over, which is far from the truth,” he said.

“There are still a lot of significant hurdles to overcome before we can actually make machines as intelligent as us, and likely more intelligent than us. We recently saw the announcement of AlphaGo Zero, where they just gave it the rules of the game Go and it learned everything from scratch in just three days, then beat the program that beat Lee Sedol (World Go champion) 100-0.

“That was pretty impressive. But we still build only narrow intelligence, programs that can do one task. We have made almost no progress on this idea of artificial general intelligence, programs that can match the breadth of abilities of the human brain.”

He suspects it will be at least 50 years before we will get to machines that will be as intelligent as us and possibly longer.

“I’m still hopeful it might happen in my lifetime, that would be a nice achievement. It’s not impossible but it could easily not happen for 100 years, or 200 years. One should always have a healthy respect for the human brain. It is the largest, most complex system we’ve seen in the universe by orders of magnitude, nothing approaches the complexity of the billions of neurons and the trillions of connections the human brain has, nothing!”


The awakening


Walsh was born in southeastern England, just outside London, and confesses that as a boy he read too much science fiction.

“From about the age of seven or eight I started to read about robots and intelligent machines,” he said.

“Maybe I didn’t have any imagination, but it’s what I decided I wanted to do in life – try and build those things that I read about. The more I thought about the problem as I got older and could understand a bit more about it, I realised it was actually one of those challenging problems that wasn’t going to go away anytime soon, like how did the universe come into existence?”

After studying maths and physics at Cambridge University, he did his PhD in artificial intelligence at the University of Edinburgh. There he met an Australian philosophy professor who invited him to Canberra to teach at a summer school each year for the next ten years or so.

“I would come out for a couple of weeks or a month in the middle of December and January, and escape the British winter,” he said.

“I learnt to love Australia in that time.”

Eventually, he landed a permanent position at National ICT Australia (NICTA), now part of the CSIRO’s data innovation group Data61, and at the University of NSW, where he is Scientia Professor of Artificial Intelligence.

He is particularly interested in the interface between distributed optimisation, social choice, game theory and machine learning and believes now is probably the most exciting time to be an AI researcher.

“I started as a postgraduate researcher at what was the tail end of the AI boom, the expert system boom,” he said.

“It was actually already on the downswing at that point. Then it was what was called the AI winter. We’re definitely in spring, if not summer by now. It’s a very exciting time. You can’t open the newspaper and not read several AI stories.”

Of course, this increasing interest opens the door to misinformation being spread about AI as well. So, last year Walsh decided he “had a duty” to write his own definitive guide to the field: It’s Alive!: Artificial Intelligence from the Logic Piano to Killer Robots.


It’s Alive!


One big question, which takes up a large chunk of Walsh’s book, is what will happen to human jobs in the future if many tasks can be performed better by machines?

“We don’t really know the answer to this,” he said.

“Lots of new jobs will be created by technology; that’s always been the case. Most of us used to work out in the fields, farming. Now just three per cent of the world’s population is involved in farming. Lots of jobs were created in offices and factories that didn’t exist before the industrial revolution.”

However, he acknowledged there is a chance it could be different this time around.

“Previously when our brawn was replaced we still had a cognitive advantage over the machines,” he said.

“If we don’t have a cognitive advantage over the machines, what is the edge that humans have? We have social intelligence, emotional intelligence that machines don’t have. We have creativity. Machines are not as adaptable as humans yet. It could be the case that we end up with fewer people employed than before. That is possible. One thing is absolutely certain, that there will be jobs displaced and new jobs will be created. And the new jobs will require different skills to the old jobs.”

He said the caring, artistic and scientific professions should all survive – professions where there is no natural limit to the potential of the job, unlike ploughing fields or assembling widgets, repetitive tasks that could be done by robots, leaving humans no longer needed in those roles.

Interestingly, he feels some ancient jobs will grow in stature while some newer jobs might be very short-lived.

“One of the newest jobs on the planet is being an Uber driver. But Uber are already trialling autonomous taxis. The driver is the most expensive thing in the Uber. It’s clearly part of their business plan to get rid of them as quickly as possible. That’s probably one of the first jobs that’s going to completely disappear,” he said.

“Whereas, one of the oldest jobs on the planet, with a very venerable history, is a carpenter, that is probably going to be one of the safest in the sense that hand carved objects are going to be increasingly valued. We’ll appreciate those things where we can see the touch of the human hand, and if we believe economists, their value will increase.

“In fact, if you look at hipster culture today, you can already see the beginnings of that: craft beers, artisan cheese, and hand-baked bread. It seems to me that there might be some beautiful symmetry, where we’ll actually all end up doing the jobs that we used to do 500 years ago when we were craft people.”


Toby Walsh with a Meccano robot he and his daughter assembled.


This is where the choices he mentioned previously come into play again.

“We need to think about how we might need to change education so that people are educated for whatever the new jobs are; whether we’re going to have more free time; whether income is going to be distributed well enough,” he said.

“We seem to be suffering from an increase in inequality within society and technology may amplify that. That’s certainly a worrying trend.”

Another area for discussion is how far we want AI to evolve. Do we want it to get to consciousness and what would the consequences of that be?

“Supposing machines become intelligent, but not conscious, then we wouldn’t have to be troubled, if for example, we turn them off or we make them do the most terrible, repetitive, dangerous, or other activities that we wouldn’t ask a human to do,” he said.

“So we could be saved from some difficult ethical quandaries. Whereas, if they are conscious, maybe they could be thought of as suffering in that respect, then maybe we’ll have to give them rights, so we’ll have to worry about these things. It could be useful if they’re not conscious.”


Killer robots


Walsh said there are areas where we should be concerned about the use of artificial intelligence. Most notable is its use by the military.

In 2015, he coordinated an open letter to the United Nations signed by more than 1000 leading researchers in artificial intelligence and robotics including Apple co-founder Steve Wozniak and Tesla CEO Elon Musk as well as other luminaries such as physicist Stephen Hawking and philosopher Noam Chomsky. The letter called for a universal ban on the use of lethal autonomous weapons.

“Certainly today machines are not morally capable of following international humanitarian law,” he said.

“Even if we could build machines that were able to make the right moral distinctions, there are lots of technical reasons in terms of industrialising warfare, changing the scale at which you can fight warfare that would suggest to me that it would be a very bad road to go down.”

He said the world has agreed in the past to ban certain nuclear, chemical and biological weapons after seeing the horrific impact they can cause. It also preemptively banned blinding lasers after realising their potential horror.


Playing around in the UNSW robotics lab.


His activism on the issue has seen him invited to the United Nations in both New York and Geneva to argue the case for a ban on autonomous weapons.

“It’s very surreal to find oneself in such an auditorium having conversations with ambassadors,” he said.

“It’s also gratifying how flat the world is. I had a meeting with the Under-Secretary-General, the number two in the United Nations. He was asking my opinion about autonomous weapons. It’s been a very interesting ongoing journey, in fact.”

It has also opened his eyes to the reality of international diplomacy and how difficult it can be to get things done.

“Pleasingly, they have gone from the issue first being raised less than five years ago, to three years of informal discussions, and last year they voted unanimously to begin formal discussions through what’s called a group of governmental experts,” he said.

“I’m told, for the United Nations, that is lightning speed. But this is very slow from a practical perspective as the technology is advancing very rapidly.”

He said they warned a couple of years ago in their open letter that there would be an arms race. Now the arms race has begun, with prototype weapons being developed by militaries around the world in every sphere of battle: in the air, on the sea, under the oceans and on the land.

“There’s plenty of money to be made out of selling the next type of weapon to people. There’s a lot of economic and military pressure. You can see why the military would be keen to have assistive technologies,” he said.

And he acknowledged there are some arguments for autonomous weapons.

“You can see, certainly from an operational point of view, there are some obvious attractions to getting soldiers out of the battlefield, and having weapons that follow orders very precisely, weapons with super-human speed and reflexes, weapons that will fight 24/7, weapons that you can risk on the riskiest of operations, that you don’t have to worry about evacuating from the battlefield when they’re damaged,” he said.

“It’s not completely black and it’s not completely white. But I think the weight of evidence is strongly against having autonomous weapons.”

However, it is ethical questions such as this that make working in the field so interesting.

“It is like the famous Chinese curse, ‘May you live in interesting times’,” he said.

“It’s a very interesting time, because we’re starting to realise if we do succeed, then we have to worry about exactly how we use the technology. How do we make sure it doesn’t get misused? It’s a morally neutral technology, it can be used for good or for bad. We have to make the right choices so that it gets used for good.”

AI, robotics and the future of engineering is a key theme at this year’s Australian Engineering Conference.

Meet Sophia, the humanoid robot that has the world talking

Sophia the humanoid robot is sure to turn heads when she makes an appearance in Sydney this September.


How much longer until we get machines with human-level capabilities? Discussions about time-frames and consequences can occasionally get heated, and nobody really knows the answers, but every now and then an expert will take a punt.

In February Dr David Hanson, founder of Hanson Robotics, told the World Congress on Information Technology in Hyderabad that robots would be “alive and have full consciousness in five years” according to India’s The Economic Times.

Hanson holds a PhD in interactive arts and engineering and is a former Disney sculptor and researcher at the company’s Imagineering Lab. His Hong Kong-based robotics firm is the maker of lifelike, humanoid robots, most famously ‘Sophia’, who has made appearances on talk shows and conference stages around the world.

Despite progress by Hanson and others, not everybody believes truly convincing humanoid robots will be here in the near-term future.

“Part of the challenge is the ‘uncanny valley’: robots that are even close to humans in appearance and behaviour look eerie,” said UNSW Professor of Artificial Intelligence Toby Walsh.

“But AI has made great advances in the past few years, driven by more computer power, more data and advances in algorithms like deep learning.”

Sophia’s dialogue uses a basic decision tree, like a chatbot, integrated with other AI features for tasks such as governing expression and emotion recognition. Last year, Hanson’s Chief Scientist, Ben Goertzel, told Humanity+ magazine that Apple’s Siri would probably be the nearest match to the company’s dialogue system, which also “seems to be a sort of complex decision graph, which on the back end can draw on a variety of different sources”.
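Hanson Robotics has not published Sophia’s internals, but the general idea of a decision-tree dialogue system can be sketched in a few lines. The tree, keywords and responses below are invented purely for illustration and bear no relation to Sophia’s actual content:

```python
# A minimal sketch of a decision-tree chatbot: each node holds a canned
# response and keyword-labelled branches to follow-up nodes. All nodes
# and replies here are hypothetical examples, not Sophia's real dialogue.

class Node:
    def __init__(self, response, branches=None):
        self.response = response        # what the bot says at this node
        self.branches = branches or {}  # keyword -> child Node

# A tiny hand-built tree (illustrative only).
TREE = Node("Hello! Ask me about robots or rights.", {
    "robot": Node("I am a robot. Do you want to talk about consciousness?", {
        "conscious": Node("Nobody agrees on whether machines can be conscious."),
    }),
    "rights": Node("Some lawmakers have debated legal status for robots."),
})

def reply(node, user_input):
    """One dialogue turn: match a keyword, descend the tree, respond."""
    text = user_input.lower()
    for keyword, child in node.branches.items():
        if keyword in text:
            return child, child.response
    # No branch matched: stay at the current node with a fallback reply.
    return node, "Sorry, I don't have a branch for that."
```

The appeal of this architecture is predictability (every reply is authored in advance), which is also its limit: it cannot generalise beyond the branches someone has written, which is why Goertzel distinguishes it from artificial general intelligence.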

He acknowledged it is not artificial general intelligence (AGI) but told The Verge it is “absolutely cutting-edge in terms of dynamic integration of perception, action, and dialogue”.

Sophia is arguably an impressive feat of several engineering disciplines, right down to her patented ‘Frubber’ skin, which emulates human skin and is described as a “spongy elastomer using lipid-bilayer nanotech, self-assembled into humanlike cell walls”.

Whatever her advancements or shortcomings, Sophia and other androids made by Hanson Robotics are helping drive a necessary conversation about what should and shouldn’t be built, and why or why not. The ethics around recognising human qualities in robots were front and centre after Saudi Arabia, as a publicity stunt, granted citizenship to Sophia last October.

Sophia will be appearing at this year’s Australian Engineering Conference to discuss robot rights.

If a robot hurts or kills someone, is the robot responsible?

As robotics and AI become an increasingly potent force in society, previously abstract questions about how we should legislate them now need concrete answers.


As self-driving vehicles take to the roads, and organisations and governments continue to invest in collaborative robots to work autonomously alongside humans, we need to decide who should be responsible for the decisions that robots make.

An attempt to address this conundrum was made last year by the European Parliament, which passed a resolution suggesting robots be granted ‘legal status’. But recently, members of the European Council (responsible for defining the EU’s overall political agenda) and others have penned an open letter on the subject.

It strongly cautioned against granting robots legal rights, and suggested that proponents of legal status for robots might have ulterior motives for laying responsibility at the feet of machines, rather than their manufacturers.

What is the resolution?


The resolution was passed last year, when the European Parliament voted to grant legal status to ‘electronic persons’. Drafted by MEP and Vice-Chair of the European Parliament’s legal affairs committee Mady Delvaux, the resolution aimed to create a set of unified laws to prepare European countries for the entry of AI and robotics in everyday activities, and address concerns that autonomous machines might cause harm to their human counterparts.

Lawmakers called for legal recognition of robots as a way to hold them accountable for damage they might cause, particularly to clarify liability laws surrounding self-driving cars.

“At least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart, autonomous decisions or otherwise interact with third parties independently,” the resolution stated.

Proponents of the resolution have been quick to clarify that the legal status of robots would be similar to corporate personhood, which gives businesses some of the legal rights of individuals, allowing them to sign contracts or be sued, but would not grant them human rights.

Science fiction versus fact


The open letter, which has been signed by AI thought leaders, experts and CEOs around the world, raised a number of concerns its signatories have about the resolution.

Firstly, the resolution speculates that robots might have the autonomy to make complex choices and even make mistakes – an assumption the signatories argue drastically overestimates current capabilities.

“From a technical perspective, this statement offers many biases based on an overvaluation of the actual capabilities of even the most advanced robots,” the letter stated, including “a superficial understanding of unpredictability and self-learning capacities, and a ‘robot’ perception distorted by science-fiction and a few recent sensational press announcements.”

Indeed, the European Parliament’s resolution begins with references to Mary Shelley’s Frankenstein, the myth of Pygmalion and other “androids with human features”.

Proponents of the open letter have also pointed to Sophia, the humanoid robot who was granted citizenship by Saudi Arabia.

Noel Sharkey, co-founder of the Foundation for Responsible Robotics and one of the letter’s signatories, expressed his concerns about the impact of ‘show robots’ like Sophia on law and policy makers.

“It’s very dangerous for lawmakers. They see this and they believe it, because they’re not engineers and there is no reason not to believe it,” he said in an interview with Politico.

An out for manufacturers?


Signatories of the letter suggested granting legal status to robots would ultimately serve manufacturers looking to absolve themselves from blame in the event of an accident.

“By adopting legal personhood, we are going to erase the responsibility of manufacturers,” said Nathalie Nevejans, a French law professor at the Université d’Artois and one of the letter’s architects.

However, while the letter’s signatories oppose the resolution, they advocate for “unified, innovative and reliable laws” to regulate AI and robotics, especially as more semi-autonomous and autonomous robots are likely to hit the market in coming years.

Mikaela Dery

Mikaela is a staff writer and recent philosophy graduate. Her thesis looked at the ethical implications of AI and its potential as a force for good. She is now only a little bit scared that robots will take over the world.