“We know from human history that with developments in technology over the centuries, ranging from the Industrial Revolution through to the invention of the automobile, then airplanes and so forth, the landscape of progress is littered with human casualties. People die because of these things being tested.”
That provocative statement is the first thing you hear in episode 1 of the third season of the Digital Download podcast, which I recorded with host Paul Sutton last month, in which we discussed emerging technologies and communications, and what’s predicted to hit the mainstream within the next two to three years.
That statement was intended to sharpen focus on the dilemmas confronting all of us when we want to try something new or radically different to advance our knowledge, our well-being or our development, even though there are risks in doing so. It’s an extreme example of risk and consequence on the journey to that next level, my thinking prompted in part by an MIT experiment a few years ago that illustrated how the goal of progress can outweigh all other balancing considerations, including ethics and morality.
I used the 2018 Gartner Hype Cycle for Emerging Technologies as the foundation for much of the great conversation Paul and I had, one that ranged far and wide across an area of direct relevance to communicators: what we must do to understand and prepare ourselves, our employers and our clients for what’s just around the corner. We also talked about how AI – far better thought of as ‘augmented intelligence’ rather than ‘artificial intelligence’ – can be of direct and measurable benefit to each of us as communicators.
If you’re not getting ready, then this conversation might help you focus on what’s important to you and help you get cracking. It’s not too late!
Think of this as your bookmark for 2019.
Transcript
Paul and I started our conversation with some general chat, moving into our topic of keen interest just over 5 minutes in. The transcript was produced by Rev.com; please review the original audio to verify accuracy.
Paul: … [05:22]. Today, we’re going to talk about, like I said, emerging technologies, and we’re going to focus on something that you raised, which is Gartner’s latest version of its hype cycle for emerging technologies. Now, Gartner’s been running this for, I believe, over 10 years now. And as we’ve chatted, you said that you’ve followed it for that amount of time. What’s your experience of how that hype cycle has changed over that time? Do you think it’s sped up?
Neville: Yes. As more technologies have emerged … I mean, most of the stuff you see in the current one, which was published some weeks ago now, wasn’t on anyone’s radar and, as we mentioned earlier, wasn’t even in the imagination a decade ago. Bear in mind that the hype cycle for emerging technologies we’re talking about was aimed at tech people, at the CEOs of big organizations. It wasn’t really intended for a broader audience.
Neville: But I remember when I first came across it in 2006. I thought communicators needed to pay attention to this because this is part of the landscape. Whether you’re interested in the tech or not, this will impact the workplace at some point.
Paul: Yeah, yeah.
Neville: Every year there’s something new in here, and getting your head around some of this stuff, frankly, is often a stumbling block for many, I think, because it introduces all sorts of red herrings. From our point of view as communicators, it’s not so much the detail of the technology. It’s what it lets people do, or not, as the case might be, the impact it has on organizations and on people, and how they connect with each other.
Neville: We’re seeing some constants in there. Bear in mind that the many things Gartner looks at are divided up according to when they think these technologies will reach the mainstream and get attention from people, anywhere from nought, like immediate, to 10 years and more. It’s been going for 12 years basically, and stuff they talked about back then as being 10 years out is still there in some versions, still languishing. It’s still not there. The prediction game is a bit tricky, I think.
Neville: We’ve seen so much shifting, and some of the phrases, every time I see them, I have to look them up somewhere. “What? What does that mean exactly?” It requires that kind of learning, most of the time.
Paul: Do you use this for yourself, for your own learning in your jobs with IBM and the Internet Society and as a consultant? Does Gartner’s work in this area help guide you in what you think you should be learning about? How much real attention do you pay to this in particular?
Neville: I pay a lot of attention to it, and I would add, by the way, that when they started there were one or two hype cycles. Now Gartner, the publisher, has at least 20 different hype cycles that cover different areas of technology in particular.
Neville: I pay attention to the emerging tech one more than any other. If I were deeply immersed in some of the subject matter, for instance blockchain, there are hype cycles on just that, or the nuances of that. I’m not. I want the broad picture from which I can dive into the things I’m interested in, and I’m interested in all of it, generally speaking. Much of it definitely informs my own thinking on what I pay attention to, and I pay great attention to trends in technology. That’s something I’ve always been interested in.
Neville: I don’t pay attention to many other kinds of trends, but in the context of my other interest, which is communication, that’s the driver for me. I do pay attention to a lot of it. I’m not an expert by any means in most of it, but I know people who are, and I typically know where to go if I want more information about any of these. They give me a good overview of the broad landscape we are navigating, let’s say, and the impact some of this can have on us. Some of it definitely will, and much of it is already having an impact.
Paul: Absolutely. I think that’s such a valuable thing for people listening, getting that broad overview, because it’s something that I’ve learned to do over many years, same as you: try and spot what is coming, what is happening, and then, having done that, go away and learn about a specific topic that seems to be gaining more traction. That’s very much the way I approach things.
Paul: Anyway, this report surveys some 2,000 technologies, so it’s a huge report, and among other things it has identified five distinct emerging technology trends which, in its phrasing, “will blur the lines between human and machine.” Today, we’re going to dig into some of that as it relates particularly to the communications industry. Where’s the best place to start with this?
Neville: Yeah, I would say a familiar topic, because it’s very close to the heart of PR certainly, and that’s AI, artificial intelligence. Indeed, the five areas you mentioned are tech groupings that Gartner has put together, and AI is part of one of them, a group they call Democratized AI, which is actually not a bad label to attach to it. But AI is a huge topic in itself, and I know I started talking about AI in a more meaningful sense back in 2015 or so. I can remember thinking, and writing on my blog occasionally, that this is something we obviously need to pay attention to, but more in the sense that this actually is invaluable for communicators.
Neville: Not “the robots are coming,” none of those things with a humanoid-looking creature sitting next to you in the workplace. This is to do with the benefit it brings you. I latched onto something, and I can’t claim to be the inventor of this phrase, but it seemed perfect to me, which is not ‘artificial’ intelligence but ‘augmented,’ meaning it augments your own intelligence. It provides you with the means to do things you absolutely cannot do without it, or that would take you an inordinate length of time to do, and the accuracy level is likely to be quite low.
Neville: It enables you to achieve things. I’m talking about the kind of repetitive tasks that we all have in our workloads that are usually quite boring, perusing lots of documents or searching databases, stuff like that. We’re not very good at that kind of thing, yet computer algorithms are excellent at it, and that’s a great example of what AI can do for you.
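As a rough illustration of the kind of repetitive ‘perusal’ work being described here, the sketch below scans a folder of plain-text documents and ranks them by how often they mention a handful of watch-list terms. It’s a minimal, hypothetical Python example (the folder path and the terms are made up), not any particular product or service mentioned in this conversation.

```python
# Minimal, hypothetical sketch: rank a folder of .txt documents by how often
# they mention a small set of watch-list terms -- the repetitive "perusal"
# work that algorithms do quickly and people find tedious.
from collections import Counter
from pathlib import Path

TERMS = {"artificial intelligence", "machine learning", "chatbot"}  # made-up watch-list

def rank_documents(folder: str) -> list[tuple[str, int]]:
    """Return (filename, mention count) pairs, most mentions first."""
    scores = Counter()
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        scores[path.name] = sum(text.count(term) for term in TERMS)
    return scores.most_common()

if __name__ == "__main__":
    for name, count in rank_documents("./coverage"):  # hypothetical folder of documents
        print(f"{count:4d}  {name}")
```

The point isn’t the code itself; it’s that a few lines can sift material in seconds that would take a person hours, leaving the interpretation of meaning to you.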
Neville: AI does that kind of heavy lifting and presents you with the results of something it analyzes in super quick time; something that might take you a week, it will do in an hour. That’s with big data sets, but even with smaller stuff than that, it then lets you use your skill, which is interpreting meaning. We’re not yet at the stage where AI can do that bit, and I have this discussion all the time with a lot of people, quite passionately sometimes. I believe that moment is very soon, where AI will be able to gain the insights from the data it has analyzed.
Paul: When you say very soon, what does very soon mean to you?
Neville: To my mind on a simple level, it’s kind of now.
Paul: Okay.
Neville: But at the value level, such as, for instance, going through large data sets, huge amounts of data, to find comparative statements and then interpreting the meaning of those comparative statements in the context of X or Y, they can’t really do that now; we do that. Yet the time will come when they can.
Neville: I’m not saying that we’re all suddenly going to be out of jobs in that regard. Far from it. This will augment us even more, it will be further complementary, and it will enable us to do things that we typically don’t do at the moment because it’s too much of a chore, it costs too much or it takes up resources that we could use elsewhere, in other ways.
Paul: Okay.
Neville: So, we’re seeing experiments with that, and I call them experiments; some people say, “No, this is real, it’s doing it now.” In the medical area, for instance, an AI algorithm, for want of a better phrase, will look at the initial diagnostics done on patients and make recommendations based on them. That’s on a simple level.
Paul: Okay.
Neville: Here, we’re talking about something that enables a machine to draw real meaning, and indeed Gartner talks a bit about that. One way they describe it is conversational AI, and we’re not there yet. Indeed, Gartner reckons that’s at least five years away.
Neville: And don’t forget, a bit like autonomous driving, we might see it happening earlier, but this all refers to when it’s in the mainstream.
Neville: So, we’ll see instances of it a lot earlier, but typically in one type of organization or a particular country, not the mainstream, and that’s really the key thing to understand about all these things.
Neville: So, this is still evolving, all of this. And we’re seeing things I mentioned about AI that are in our workplaces already; we can employ some of these on a simple level to do certain things. Virtual assistants are a good example, these so-called digital virtual assistants, as I might call them. Siri, Cortana and Alexa are in that sort of area. They’re relatively simple technologies right now; they respond to commands based on pre-programmed knowledge.
Neville: And, they’re not conversational, really, but that’s an indicator of what’s potentially possible. These things are incremental in their development, and I truly believe we’re gonna see some really amazing things.
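To make that ‘pre-programmed’ point concrete, here is a toy sketch of the pattern today’s assistants broadly follow: each recognised phrase maps to a canned response, and anything outside that set simply fails. The commands and replies are invented for illustration; real assistants are far more elaborate, but they are not yet having a conversation with you.

```python
# Toy sketch of command-and-response on pre-programmed knowledge
# (commands and replies are hypothetical; real assistants are far richer).
from datetime import datetime

COMMANDS = {
    "what time is it": lambda: datetime.now().strftime("It's %H:%M."),
    "set a timer":     lambda: "Timer set for 10 minutes.",  # canned, nothing is parsed
    "play music":      lambda: "Playing your default playlist.",
}

def respond(utterance: str) -> str:
    """Look the utterance up against known commands; there is no real conversation."""
    key = utterance.lower().strip(" ?.!")
    handler = COMMANDS.get(key)
    return handler() if handler else "Sorry, I don't know how to help with that."

print(respond("What time is it?"))
print(respond("How should I plan my week?"))  # falls outside the pre-programmed set
```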
Neville: Look at the car; look at what’s happening with cars. And I’m not talking about autonomous driving; I’m talking about what’s called the driver interface, the tech in your car. These are getting more sophisticated on a daily basis in their ability to present you with information that doesn’t necessarily interfere with your driving. A lot of it is actually in the background, understanding your behavior and responding accordingly, learning as it does so. That’s an interesting thing.
Neville: Take, for instance, what Stephen Waddington talked about in a recent episode: AI in the PR industry and what the CIPR’s AI panel is doing. These are great initiatives that help us learn what’s happening, what might happen, what it means to us and what we need to do about it right now, and that knowledge then permeates out through the PR community. That’s a great thing, so I think we need to see a lot more of it, and it makes sense for us as communicators to embrace some of this stuff, even though some of it is a bit mind-boggling, I must admit.
Neville: So that work helps, and understanding what to pay attention to is the key thing. We look for guidance in that area, which is where the people at the CIPR come into play, guiding us on what we should pay attention to, and AI is definitely one of the topics worth paying attention to.
Paul: Yeah, and the Gartner hype cycle and the CIPR AI panel’s initial findings seem to match up when I look at the two. I guess it makes sense that they would. But, as an example, the panel looked at a period of three to five years ahead and has tried to map out how PR and comms jobs might be affected by technologies, and then you have things within the hype cycle, like you say, virtual assistants, which you’re saying are sort of two to five years out.
Paul: There’s things like 5G technology, for example, which is two to five years out. So, there are technologies that Gartner has identified as coming through in that time frame which would match up with the things that Stephen Waddington said, for example.
Paul: One of the other things I noticed when I compared them, actually, and it’s something that Stephen talked about, is that AI, in a comms sense, is misunderstood. People don’t understand what it means in a practical sense.
Paul: When you look at Gartner, it splits AI out: you’ve got artificial general intelligence, you’ve got conversational AI, like you mentioned, you’ve got neural nets. I mean, there are so many areas to this, which makes it difficult to get your head around in any sense.
Neville: Yeah, it’s true. I think it’s the nature of tech, isn’t it, Paul? I mean, you’ll know as I do that when you talk to tech people in the organization, you get stuff that is dense; it’s hard to understand a lot of the time.
Paul: Yes.
Neville: So, our job as communicators is to take all this dense stuff and translate it into terms that everyone else can easily understand. That is a prime role we have. Indeed, that makes us look valuable to our colleagues, our employers and our clients.
Neville: But that means we need to understand this stuff ourselves, so we look to the leaders in this: people like Wadds and that CIPR panel, for instance, who are already doing that, and others who are active in the space and talk about it a lot.
Neville: Hence, I talk about AI in more emotional terms than the abstract depth we typically hear, if that doesn’t sound like a contradiction. The worst thing you can do is look up the definition of AI on Wikipedia; that’ll make your eyes boggle, totally.
Neville: But it certainly gives you a pointer, and so you could then go elsewhere. You then find conflicting definitions everywhere you look, and that’s tricky. I tend to talk about it, as I mentioned earlier, in the context of augmented intelligence, looking at what it can do as opposed to telling you what it is. I rely on the techies to do that, and I often point people and say, well, go here and you’ll find a definition. Gartner has a definition, the likes of IBM and IBM Watson have definitions, and you’ve got other organizations who are in this area, too.
Neville: A lot of the tech journalists who are following this have definitions, so you can find simplified explanations if you want to know what AI means. Then it gets a little more confusing when you see people like Gartner talking about these splits, these different elements of it.
Neville: And a lot of what you mentioned, things like machine learning, is a subset of AI, if you like. So, do we need to know the detail of that? Well, yes, we do, actually, if we are going to help our colleagues understand what it means to them and for them in their work, and then for their employers or their clients.
[Promo insert 19:30-20:38]
Paul: You talked about virtual assistants a little there, and you said that things like Siri and Alexa and all those things now are quite simple, effectively, because you ask them a question and you get an answer.
Paul: What, in your view, is the role that that sort of technology could take in, say, three years’ time? I mean, how do you see that evolving? Is it purely to become more conversational, or is it actually the machine learning side of it taking on, like you say, analyzing data?
Paul: So, we can ask a question of Google, and it comes back with something it has actually analyzed rather than something that was found in a search.
Neville: I want this to make a genuine difference to my life. I want it to manage my day-to-day activities. So I look at what some of the organizations working in areas like this are doing, and I refer back to IBM again, not because I worked there, but because of some things I learned there and some things I’ve been paying attention to since, in terms of the day-to-day things that we all do.
Neville: Good example: your calendar, your appointment schedule, your email, your contact list and all that stuff that we have on our computers, whether it’s Microsoft Office or whatever app you’re using, I want an AI to run that for me.
Paul: Right.
Neville: So, for instance, that would manifest itself in ways like the AI looking at all my contacts and figuring out which ones are important to me, which ones I should be paying attention to, and which ones I could safely ignore. How would it do that?
Neville: A mix of things. This is where being networked is key, because the AI analyzes your contacts, looking at each one in the context of every other contact, but also at how they present themselves online and where they do so.
Paul: Okay.
Neville: And it scores them. That will manifest itself in ways like when you get a request for a meeting: the AI can make a simple decision as to whether you should give this person your time or not. Or it decides yes, that person is actually very important because he or she is connected to these other six people who are also important to you, and it will then make that decision.
Neville: And it will tell you, “I’ve accepted the meeting for you on Thursday at 10:00 AM with so and so.”
Neville: Some people I know, because we talk about this, find this terribly creepy. To be frank, I can’t wait for it to do that, and also to suggest people I ought to be contacting, and to filter my email and do stuff like that. That’s a very prosaic example of something so day-to-day, and yet we spend a lot of time on this sort of activity.
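For readers who want something concrete, here is a hypothetical toy model of the contact-scoring idea described above: score each contact by how many of your already-important people they are connected to, and accept or decline a meeting request based on that score. The names, network data and threshold are all invented; no real assistant works exactly this way, and a genuine one would weigh far more signals.

```python
# Hypothetical toy model of contact scoring and meeting triage.
# Assumption: each contact lists the people they are connected to,
# and some contacts are already marked as key people for you.
KEY_CONTACTS = {"alice", "priya", "tomas"}      # people already important to you

CONNECTIONS = {                                  # invented network data
    "sam":   {"alice", "priya", "tomas", "li"},
    "jo":    {"li"},
    "maria": {"alice", "tomas"},
}

def score(contact: str) -> int:
    """Score a contact by how many of your key people they are connected to."""
    return len(CONNECTIONS.get(contact, set()) & KEY_CONTACTS)

def triage_meeting_request(contact: str, threshold: int = 2) -> str:
    """Accept a meeting only if the requester is well connected to your key people."""
    s = score(contact)
    if s >= threshold:
        return f"Accepted: meeting with {contact} (score {s})."
    return f"Declined: {contact} scores {s}, below threshold {threshold}."

for person in CONNECTIONS:
    print(triage_meeting_request(person))
```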
Neville: Much of that time is not wasted, but it’s an extraordinary amount of time spent on simple tasks, and this kind of tool could do them. So, that’s one little example. The other, to use the example of Alexa and so forth, is genuine conversational AI, which, again, Gartner reckons is some years out yet. But it will have a conversation with you. Think of some of the science fiction movies and TV series you’ve seen; that’s the kind of thing we’re talking about.
Neville: Going back to Knight Rider in the 1980s, right up to modern times, you are having an actual conversation with a machine that doesn’t necessarily look like a human. It would probably be an algorithm on your computer, so it’s a disembodied voice; it could be a hologram.
Neville: So, it could be something that enables you to make a connection with it in ways that are genuinely intelligent, and there we’re entering the area that the definers of these terms look at. Does it exhibit some kind of consciousness? Is it sentient, as we define it? Does it exhibit human-like behavior without the intervention of a human?
Neville: And we’re moving into those areas. We’re seeing experiments. None of these, though, I would argue, are genuinely intelligent like a human. It may be many decades before we get to that, but in the meantime we will have technology that is serviceable and has utility, that works for us in ways we’re perfectly happy with, and it won’t be perfect, either. We know from human history that with developments in technology over the centuries, ranging from the Industrial Revolution through to the invention of the automobile, then airplanes and so forth, the landscape of progress is littered with human casualties. People die because of these things being tested, and we’re already seeing that now with things like autonomous driving experiments where a car crashes and people have been killed.
Neville: So, to be honest, Paul, I don’t see that being any different in terms of the progress that we’re going to make with AI for instance.
Neville: Not to paint a gloomy picture, but to me it’s just the reality of accepting that. It’s not dismissive by any means; it’s a non-emotional look at this landscape, and at human history, so we will see those things on that road to progress. It’s up to us as a society to decide whether that price is right or not, and that’s something philosophers could debate, as well as ethicists.
Neville: But, this is the landscape we’re embarking upon.
Paul: Yeah, and you said something then about the level of intelligence being one of these markers. Someone whose thoughts I’ve followed on this for, I don’t know, five or six years now is Ray Kurzweil, who’s a Director of Engineering at Google.
Paul: His whole point is that technological innovation is rapidly accelerating and continues to accelerate. His original prediction, and he’s got a great track record, is the technological singularity, which, for people listening, means the point when machines effectively become smarter than humans; he’s predicting that by 2045, which is not that far away, actually, when you think about it. And one of his other predictions is that by 2029 a robot will effectively pass the Turing test, which is, as you said, a human level of intelligence.
Paul: I’ve seen arguments that some robots have already passed the Turing test. I’ve not been convinced when I’ve read about them, but I don’t think it’s so far ahead that it’s impossible that in the next 10 years there will be machines that have, as you’ve said, that human level of intelligence.
Neville: No, I would agree with you. I think if we look at some examples of what’s happening now, for example Sony recently introduced a little pet dog that’s a robot. It’s not made to look like a real dog; this is very clearly an artificial device of some kind. But all the reports I’ve seen about it, ranging from the tech journals that have been reviewing it to journalists writing in mainstream media publications, all in the United States, have been wowed by the realism of the interaction with this device.
Neville: And that’s an indicator of something, yet it’s certainly not intelligent by any definition. Will it get there? Well, it’s another step on the way, because this is an evolution of a device that was first developed and put out there a year or two back. And this is totally different.
Neville: I don’t know whether you’ve seen a film from a few years ago now called ‘Robot & Frank’? It’s the story of a retired man whose son bought him a robot to keep him company. That was definitely science fiction at the time, but the film was literally five years ago, maybe a bit longer, and things have changed so much in that short time that it looks terribly credible. You could say, “Yup, I could see exactly this sort of thing within 10 years.”
Neville: So, we’ve got signs everywhere that the blurring of science fiction and science reality is definitely upon us. I think Ray Kurzweil’s prediction about that sort of time range is possible.
Paul: Absolutely. And one of the things you were talking about with virtual assistants has effectively, or could have, a significant impact on things like voice search, which is now really taking off. So, the whole area from a communications perspective of voice-enabled marketing is something that I’ve been paying more attention to in the last year specifically. And when I talk about it to people, people are still a bit skeptical at this stage about whether they’re going to need basically a voice marketing strategy. But to my mind, that is something that’s got to happen in the next couple of years. If you’re not paying attention to it now, you will be playing catch up.
Neville: Yeah exactly, you’ve nailed it completely. We need to be paying attention to this right now. I mean, I talk to my computer all the time, and my phone; it’s a sign of what’s coming. That said, there are some very dark clouds on the horizon with this, and those dark clouds also have to do with video: the fakery that is now at the fingertips of people with bad intent, to make something appear to be different from what it actually is. You’ve already got examples of video; the famous one I keep seeing referenced everywhere is Barack Obama making a speech where none of what he’s saying in the video he actually said at all. The speech never took place, yet you look at the video and you would never imagine that this wasn’t Barack Obama making that speech. It’s video clips stapled together, in a sense, but you absolutely cannot see any of the joins anywhere.
Neville: The same with audio, the same with voice. So the wherewithal is there for fakery and other bad behaviors to exhibit and manifest themselves, and you are hoodwinked by it. That, to me, is something the tech people behind all this have got to find a solution for; otherwise this will not be trusted, and it will not progress beyond where it currently is.
Neville: I believe there will be solutions, but I also believe that there’s going to be some big trip ups along the route, like with most things. I think we’ll see people being fooled by this, I think we’ll see scams and all sorts of things happening. Hopefully nothing worse than that. But that’s part of the landscape, and it needs to be addressed.
Paul: And how much of a threat do you think that whole fake news, fake video, fake stories, fake everything is as a threat to communications?
Neville: I think it’s huge, Paul. I won’t go as far as some people who say it’s a threat to democracy, though potentially I suppose it is if you’re talking about rigging elections and therefore quite literally stealing an election. But yes, the fakery is genuinely disturbing. I’ve been involved in discussions with others talking about this behavior as part of the human condition. That’s probably true, but it doesn’t mean to say that’s okay, fine, we just accept it. We need to find a way to combat the bad actors at play here, and ensure on the one hand that nothing we do is like that: we don’t manipulate, we don’t do things that throw suspicion on our credibility and our ethics. We behave in ways that are beyond question, according to the standards of behavior we’ve all set ourselves. And that extends to how we use tools like this to convey messaging, to educate people, to persuade others to a point of view.
Neville: So, that’s our job, to do it like that. And what other people do is going to happen. We just need to be sure that we don’t behave that way.
Paul: Yeah, absolutely, totally agree.
Paul: One of the things that stood out for me, which I find a fascinating idea, is the brain computer interface, which it says is five to ten years away. So we’re not talking about next week we’re going to be able to do this, but I long for the day when I have an interface with my computer just with my thoughts.
Neville: Well, it’s part of what Gartner labels in this group as ‘do-it-yourself biohacking.’ And I love that, actually, because it puts it in a context that makes it easier to understand what it might mean. It is still, to many I think, in the realms of science fiction, but according to Gartner it’s five to ten years away from the mainstream. So I’m willing to accept that premise, although, while I have looked, I’ve not paid close attention, and I’ve not seen many people talking about this in any way that makes me interested in finding out more.
Neville: So, does that mean it’s not happening? No, of course not. There’s stuff going on behind the scenes; there are people paying a lot of attention to some of these things and quietly experimenting. We may suddenly see an emergence of talk about this, and I think Gartner have done a pretty good job with this kind of thing. They talk about this being the beginning, which is probably why it’s worth giving it a little bit of attention. It’s five to ten years out, and they’re saying this is the beginning of what they call the transhuman age, where hacking biology and extending humans will increase in popularity and availability.
Neville: So they talk about things we’re hearing about already, like neural implants, where the discussion now is about the ethics and the humanity of doing things like this. They’re saying we’re just seeing the start point, so expect to see more about this, in which case I’m willing to look at it from that point of view. I don’t know enough about it yet. If it means what we think it means, then yes, like you, I can’t wait for something like this, where I can be plugged into something in some way.
Neville: Equally, others I know are horrified by the notion, so you’ve got that side of it to consider. Hence, if this is a topic, and there are others like it, that is likely to gain traction somehow and have an impact on what we do, we need to understand it in order to discuss it with our colleagues who are horrified by it, to explain to them why it’s nothing to be horrified by, or why it is, as the case might be. So part of our job is to understand it, which means we need to pay attention to this.
Paul: Yeah, absolutely. And like you said, there are red herrings in this anyway which will fall away, so who knows, it could be one of them. But the idea of it is just fantastic.
Paul: Okay, well listen, we’re kind of out of time, but that’s been really fascinating. There are loads of great ideas to explore there. To wrap up, how do you recommend people go about it? They’re doing their day jobs in the world of communications and digital, and then there’s all of this stuff happening in our peripheral vision that we know, I think, we should be keeping an eye on, but that isn’t impacting us right now. How do you think people should keep up to date with it? Are there specific things they should be reading or listening to? What are your thoughts on that?
Neville: Yeah, to put it simply, without giving people, you know, a 10-point list of stuff you need to do, the first thing I would say is be curious. And that’s often easier said than done. I tend to be exceptionally curious. Some people call me nosy, but I’m always wondering, what’s that? How does that work? Who’s doing what? Constantly.
Neville: So, I think I’m lucky, frankly, because that’s how I find out a lot of things. Pay attention to the likes of Gartner. They’re not the only game in town, but they have longevity and they are relatively open in sharing much of their proprietary research, so you will see summaries of this stuff. Pay attention to bodies like the CIPR, certainly if you’re in the PR business, and the AI panel there; look at their findings. If you Google, you will find lots of people talking about this, so try and find folk here in the UK, as opposed to only in America, because perspectives are often quite different.
Neville: I notice the Guardian has had some great reporting on AI recently, the Telegraph as well. The BBC is running an AI series this week, literally, or has run it; perhaps I missed it, maybe I’ll catch up on iPlayer. From things like that, you gain some knowledge. Then look on social channels at people you know and see who’s talking about this. Googling will find that, but links through people like the CIPR and the AI panel will also help you find people like that.
Neville: That’s the best advice I can give, Paul.
Paul: Yeah, fantastic. Okay.
Paul: Well listen, thank you so much for your time, really appreciate it.
Neville: My pleasure.
Paul: Where can people find you online if they’d like to talk to you further?
Neville: Twitter, @jangles is where I am most of the time. My website, nevillehobson.com. The two podcasts that I do, smalldataforum.com and firpodcast.net.
Paul: Thank you again for your time.
Neville: My pleasure, Paul, thanks.
Paul: You can subscribe to Digital Download on iTunes, Google Podcasts, or wherever else you get your podcasts. And if you’ve got any ideas for future topics you’d like to see covered, or people you’d like to hear from, contact me on Twitter, where I’m @ThePaulSutton. Thank you for listening. [37:01]
Related reading filed under ‘Artificial Intelligence’
- 2018: the year of chatbots, AI and digital personal assistants
- Who should die when a driverless car crashes? Q&A ponders the future
- How to put AI into cognitive perspective
(Photo at top by Franki Chamaki on Unsplash)