Monday, November 20, 2017

Janesville - a town that explains Trump and also why you shouldn't judge or blame people for being poor

You’re put in a town that implodes when the car plant closes down and 9,000 people lose their jobs. GM was a mess – incompetent management, old models, a company that failed to innovate. As if that wasn’t enough, Janesville is hit with Biblical levels of rain (climate change?). This is journalism at its best, by a Pulitzer-winning writer, written from the perspective of the people affected. Want to know why working America is pissed? Read this book. Told with compassion but realism, through the lives of real people in a real town.
For over 100 years they had produced tractors, pick-ups, trucks, artillery shells and cars. Obama came and went, the financial crisis hammered them deeper into the dirt but while the banks were bailed by the state, the state bailed on the people. On top of this a second large, local employer, Parker Pens, outsourced to Mexico but the market for upmarket pens was also dying. The ignominy of being asked to extend your wages by a few weeks by going down to Mexico to train their cheaper labour was downright evil.
Then the adjacent businesses started to fail – the suppliers, trades, shops, restaurants, nurseries for two-income families – then came the mortgage and rent arrears, foreclosures, falling house prices, negative equity. As middle-class jobs go, they push down on working-class jobs and the poor get poorer.
“Family is more important than GM” – this is the line that resonated most with me in the book. In this age of identity politics, most people still see a stable family and their community as their backstops. The left and the right have lost focus on this. The community didn’t lie down – they fought for grants, raised money themselves, helped each other – but it was not enough.
Grants for retraining were badly targeted; training people for reinvention is difficult for monolithic, manufacturing workforces. Some of it was clearly hopeless, like discredited Learning Styles diagnosis and overlong courses of limited relevance to the workplace or practice. Many couldn’t use computers, so there was a huge drop-out rate, more debt and little in the way of workplace learning. Those that did full degrees found that what few jobs there were had been snapped up while they were in college – their wages dropped the most, by nearly half. One thing did surprise me: the curious offshoot of anti-teacher hostility. People felt let down by a system that doesn’t really seem to work and saw teachers as having great holidays, pensions and healthcare, while they were thrown out of work. The whole separation of educational institutions from workplaces seems odd.
Jobs didn’t materialise. What jobs there are exist in the public sector – in welfare charities and jails. A start-up provided few jobs; many commuted like gypsies to distant factories. Even for those in work, there was a massive squeeze on wages, in some cases a 50% cut, sometimes more. In the end jobs came back but real wages fell. Healthcare starts to become a stretch. But it’s the shame of poverty, the food banks, the homeless teenagers and a real-life tragedy 200 pages into the book that really shake you.
The book ends with the divide between the winners and losers. This is the divide that has shattered America. Janesville is the bit of America that tourists, along with East and West Coast liberals, don’t see. The precariat are good people who are having bad things done to them by a system that shoves money upwards into the pockets of the rich. Looked down upon by liberals, they are losing faith in politics, employers, the media, even education.
Wisconsin turned Republican and Trump was elected. The economist Mark Blyth attributes the Trump win to this wage squeeze and the fall in expectations, even hope. People got a whole lot poorer and don’t see a great future for their kids.

A more relevant piece of work than Hillbilly Elegy, with which it is being compared. Final thought – why are journalists in the UK not doing this? Answer: they’re bubble-wrapped in their cozy London lairs, part of the problem and too lazy to get out and do their jobs… writing the same stories about why they don’t like social media, failing to see that they are the purveyors, not so much of fake news as inauthentic news, irrelevant news, news reduced to reporting on shadows within their own epistemological cave… one exception – John Harris.


Saturday, November 18, 2017

Jaron Lanier: Dawn of the New Everything: A Journey Through Virtual Reality

As a fan of VR I was looking forward to this book. Lanier is often touted as the inventor, father or, more realistically, the guy who came up with the phrase ‘Virtual Reality’. I’m not sure that any of this is true and, to be fair, he says as much late in the book. The most curious thing about the book is how uninteresting it is on VR – its core subject. Lots on the early failed stuff, and endless musings on early tech folk, but little that is truly enlightening about contemporary VR.
My problem is that it’s overwritten. No, let me rephrase that: it’s self-indulgently overwritten. I’ve always liked his aperçus, little insights that make you look at technology from another perspective, such as ‘Digital Maoism’ and ‘Micropayments’, but this is an over-long ramble through an often not very interesting landscape. He has for many years been a gadfly for the big tech companies, but the book is written from within that same Silicon Valley bubble. Critical of how Silicon Valley has turned out, he’s writing for the folk who like this world and want to feel its early pulse.
He finds it difficult to move out of that bubble. I’m with him on the ridiculous Kurzweil utopianism, but when Lanier moves out into philosophy, or areas such as AI, it’s all a bit hippy-dippy. On AI there’s a rather ridiculous attempt at a sort of Platonic dialogue that starts with a category mistake: VR = AI. No – they are two entirely different things, albeit with connections. Although it is interesting to describe AI as a religion (there is some truth in this, as it has transhuman aspects), it’s a superficially clever comment without any accompanying depth of analysis.

I was disappointed. You Are Not A Gadget was an enlightening book, this is a bit of a shambles.


Sunday, November 12, 2017

7 ways to use AI to massively reduce costs in the NHS

I once met Tony Blair and asked him, “Why are you not using technology in learning and health to free it up for everyone, anyplace, anytime?” He replied with an anecdote: “I was in a training centre for the unemployed and did an online module – which I failed. The guy next to me also failed, so I said, ‘Don’t worry, it’s OK to fail, you always get another chance…’ To which the unemployed man said, ‘I’m not worried about me failing, I’m unemployed – you’re the Prime Minister!’” It was his way of fobbing me off.
Nevertheless, 25 years later, he publishes this solid document on the use of technology in policy, especially education and health. It’s full of sound ideas around raising our game through the current wave of AI technology. It forms the basis for a rethink around policy, even the way policy is formulated, through increased engagement with those who are disaffected and direct democracy. Above all, it offers concrete ideas in education, health and a new social contract with the tech giants to move the UK forward.
In healthcare, given the challenges of a rising and ageing population, the focus should be on increasing productivity in the NHS. To see all solutions in terms of increasing spend is to stumble blindly onto a never-ending escalator of increasing costs. Increasing spend does not necessarily increase productivity; it can, in some cases, decrease it. The one thing that can fit the bill, without inflating the bill, is technology, AI in particular. So how can AI increase productivity in healthcare?
1. Prevention
2. Presentation
3. Investigation
4. Diagnosis
5. Treatment
6. Care
7. Medical education
1. Prevention
Personal devices have taken data gathering down to the level of the individual. It wasn’t long ago that we knew far more about our cars than our own bodies. Now we can measure vital signs, critically, across time. Lifestyle changes can have a significant effect on the big killers: heart disease, cancer and diabetes. Nudge devices, providing the individual with data on lifestyle – especially exercise and diet – are now possible. Linked to personal accounts online, personalised prevention could do exactly what Amazon and Netflix do, nudging patients towards desired outcomes. In addition, targeted AI-driven advertising campaigns could also have an effect. Public health initiatives should be digital by default.
2. Presentation
Accident and Emergency can quickly turn into a war zone, especially when General Practice becomes difficult to access. This pushes up costs. The trick is to lower demand and costs at the front end, in General Practice. First, GPs must adopt technology such as email, texting and Skype for selected patients. There is a double dividend here, as this increases productivity at work: millions need not take time off work to travel to a clinic, sit in a waiting room and get back home or to work. Travelling is a particular problem for the disabled, the mentally ill and those who live far from a surgery. Remote consultation also means less need for expensive real estate – especially in cities. Several components of presentation are now possible online: talking to the patient, visual examination, even high-definition images from mobile for dermatological investigation. As personal medical kits become available, more data can be gathered on symptoms and signs. Trials show patients love it, and successful services are already being offered in the private sector.
Beyond the simple GP visit lies a much bigger prize. I worked with Alan Langlands, the CEO of the NHS and the man who implemented NHS Direct. He was adamant that a massive expansion of NHS Direct was needed, but commented that they were too risk-averse to make that expansion possible. He was right, and now that these risks have fallen, and the automation of diagnostic techniques has risen, the time is right for such an expansion. Chatbots, driven by precise discovery techniques, can start to do what even doctors can’t: preliminary diagnosis at any time, 24/7, efficiently and, in some areas, more accurately than most doctors. Progress is being made here; AI already has successes under its belt and progress will accelerate.
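To make the idea concrete, here is a deliberately minimal sketch of rule-based triage in Python. Real symptom checkers use far richer probabilistic models trained on clinical data; the symptom names, rules and dispositions below are invented for illustration only.

```python
def triage(symptoms: set) -> str:
    """Map a set of reported symptoms to a disposition.

    A toy rule-based sketch: real triage chatbots combine many more
    signals and probabilistic reasoning, not simple set intersection.
    """
    red_flags = {"chest pain", "severe bleeding", "loss of consciousness"}
    gp_flags = {"persistent cough", "rash", "joint pain"}

    if symptoms & red_flags:    # any red-flag symptom escalates immediately
        return "emergency - call 999"
    if symptoms & gp_flags:     # non-urgent, but needs a clinician
        return "book a GP appointment"
    return "self-care advice"   # default: no flags matched

print(triage({"chest pain", "rash"}))  # red flag wins over the GP flag
print(triage({"rash"}))
print(triage({"hiccups"}))
```

Even a sketch like this shows the appeal: the rules are auditable, run at any hour, and escalate conservatively; the hard part, which the rules hide, is getting the flag lists and their coverage clinically right.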
3. Investigation
Technology is what speeds up the bulk of investigative techniques: blood tests, urine tests, tissue pathology, the reading of scans and other standard tests have all benefited from technology. In pathology, looking at tissue under a microscope is how most cancer diagnosis takes place. Observer variability will always be a problem, but image-analysis algorithms are already doing a good job here. Digitising slides and scans also means the death of distance. Faster and more accurate investigation is now possible. Digital pathology and radiology, using data and machine learning, is the future.
4. Diagnosis
AI already outperforms doctors in some areas, matches them in others, and it is clear that progress will be rapid in the rest. This does not mean that doctors will disappear, but it does mean they, and other health professionals, will have less workload and be able to focus more on the emotional needs of their patients. Lots of symptoms are relatively undifferentiated, some conditions are rare, and probability-based reasoning is often beyond the brain of the clinician. AI and machine learning offer a way past this natural rate-limiting step. We must accept that this is the way forward.
5. Treatment
Robot pharmacies already select and package prescriptions. They are safer and more accurate than humans. Wearable technology can provide treatment for many conditions, as can technology provided for the patient at home. Repeat prescriptions and ongoing treatment could certainly be better managed by GPs and pharmacists online, further reducing workload and pressure on patients’ time. Above all, patient data could be used for more effective treatment and a vast reduction in waste through over-prescribing.
In hospitals, automated robots such as TUG are already delivering medication, food and test samples, reducing the humdrum tasks that health professionals have to do, day in, day out. Essentially a self-driving cart, TUG negotiates hospital corridors, even lifts, using lasers and internally built AI maps. The online management of treatment regimes would increase compliance with those regimes and save costs.
6. Care
Health and social care are intertwined. Much attention has been given to robots in social care, but it is AI-driven personalized care plans, decision support for care workers and more self-care that hold most promise and are already being trialled. AI will help the elderly stay at home longer by providing detailed support. AI also gives support to carers. It may also, through VR and AR, provide some interesting applications in autism, ADHD, PTSD, phobias, frailty and dementia.
7. Medical education
Huge sums are spent on largely inefficient medical training. There are immense amounts of duplication in the design and delivery of courses. AI-created content can produce high-quality, high-retention content in minutes not months (WildFire). Adaptive, personalized learning gets us out of the trap of batched, one-size-fits-all courses. On-demand courses can be delivered, and online assessment – now possible with AI-driven digital identification, keystroke tests and automated marking – becomes easier. Healthcare must get out of the ‘hire a room with round tables, a flipchart and PowerPoint (often awful)’ approach to training. The one body that is trying here is HEE, with its e-Learning for Healthcare initiative. Online learning can truly increase knowledge and skills at a much lower cost.

It is now clear that AI can alleviate clinical workload, speed up doctor-patient interaction, speed up investigation, improve diagnosis and provide cheaper treatment options, as well as lower the cost of medical training. We have a single, public institution, the NHS, where, with some political foresight, a policy around the accelerated research and application of AI in healthcare could help alleviate the growing burden of healthcare. Europe has 7% of the world’s population, 25% of its wealth and 50% of its welfare spending, so simply spending more on labour is not the solution. We need to give more support to healthcare professionals to make them more effective by taking away the mundane sides of their jobs through AI, automation and data analysis.


Monday, November 06, 2017

47% of jobs will be automated... oh yeah...10 reasons why they won’t….

I’ve lost count of the times I’ve seen this mentioned in newspapers, articles and conference slides. It comes from a 2013 paper by Frey and Osborne. First, it refers only to the US, and states only that such jobs are under threat. Dig a little deeper and you find that it is a rather speculative piece of work. AI is an ‘idiot savant’: very smart on specific tasks but very stupid and prone to massive error when it goes beyond its narrow domain. This paper errs on the idiot side.
They looked at 702 job types then, interestingly, used AI itself (machine learning), training it with 70 jobs judged by humans as being at risk of automation or not. They then used this data to train a ‘classifier’, a piece of software, to predict the probability of the other 632 jobs being automated. You can already see the weaknesses. First, the human-labelled training set – get this wrong and the error sweeps through the much larger set of AI-generated conclusions. Second, the classifier – even if it is out by a little, it can reach wildly wrong conclusions. The study itself, largely automated by AI, rather than being a credible forecast, is more useful as a study of what can go wrong in AI. Many other similar reports parrot these results. To be fair, some are more fine-grained than the Frey and Osborne paper, but most suffer from the same basic flaws.
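For readers who want to see the shape of the method, here is a toy version in Python. This is not Frey and Osborne’s actual pipeline (they used a Gaussian process classifier over O*NET job features); the two features and all labels below are invented. But the structure is the same: fit a model on a small hand-labelled set, then let it score everything else – so any error in those labels propagates into every score.

```python
import math

# Hand-labelled training set: (routine_score, social_score) in [0, 1],
# label 1 = "automatable". All values here are invented for illustration.
labelled = [
    ((0.9, 0.1), 1),  # warehouse picker
    ((0.8, 0.2), 1),  # data-entry clerk
    ((0.2, 0.9), 0),  # therapist
    ((0.3, 0.8), 0),  # primary teacher
]

def train(data, lr=0.5, epochs=2000):
    """Fit a tiny logistic regression by gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
            err = p - y          # gradient of log-loss w.r.t. the logit
            w[0] -= lr * err * x1
            w[1] -= lr * err * x2
            b -= lr * err
    return w, b

def predict(model, x):
    """Automation probability for an unlabelled job."""
    w, b = model
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

model = train(labelled)
# The remaining jobs inherit whatever the tiny training set encoded:
print(round(predict(model, (0.85, 0.15)), 2))  # routine job: high probability
print(round(predict(model, (0.25, 0.85)), 2))  # social job: low probability
```

Note the leverage: four labelled examples determine the scores of every other job, which is the first weakness described above, writ small.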
Flaw 1: Human fears trump tech
The great flaw is over-egging the headline. The fact that 47% of jobs may be automated makes a great headline but is a lousy piece of analysis. Change does not happen this way. In many jobs the context or culture means that complete automation will not happen quickly. There are human fears and expectations that demand the presence of humans in the workplace. We can automate cars, even airplanes, but it will be a long time before airplanes will fly across the Atlantic with several hundred passengers and no pilot. There are human perceptions that, even if irrational, have to be overcome. We may have automated waiters that trolley food to your table but the expectation that a real person will deliver the food and engage with you is all too real. 

Flaw 2: Institutional inertia trumps tech
Organisations grow around people and are run by people. These people build systems, processes, budget plans and funding processes that do not necessarily quickly lead to productivity gains through automation. They often protect people, products and processes that put a brake on automation. Most organisations have an ecosystem that makes change difficult – poor forecasting, no room for innovation, arcane procurement and sclerotic regulations. This all militates against innovative change. Even when faced with something that saves a huge amount of time and cost, there is a tendency to stick to existing practice. As Upton Sinclair said, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
Flaw 3: Low labour costs
What is often forgotten in such analyses is the business case and labour supply context. Automation will not happen where the investment cost is higher than hiring human labour, and is less likely to occur where labour supply is high and wages low. We have seen this recently, in countries such as the UK, where the low-cost labour supply through immigration has been high, making the business case for innovation and automation weak. Many jobs could be automated, but the lack of investment money, the availability of cheap labour and low wages keep the human bar quite low. There are complex economic decision chains at work here that slow down automation.
Flaw 4: Hyperbole around Robots
Another flaw is the hyperbole around ‘robots'. Most AI does not need to be embedded in a humanoid form. Self-driving cars do not need robot drivers, vacuum cleaners do not need humanoid robots pushing them around. Most AI is invisible, online or embedded in the electronics of a device. As Toby Walsh rightly says, when he eviscerates certain parts of the Frey and Osborne report, there’s no way robots will be cutting your hair or serving your food by weaving through busy restaurants with several plates of food, any time soon. The ‘Reductive Robot Fallacy’ is the anthropomorphic tendency to equate AI with robots, along with the idea that robot technology has to look like us and do things the way humans do them. The vast majority of robots, AI-driven machines that perform a useful function, do not look like humans; many are online and almost invisible.

Flaw 5: Hyperbole around AI
AI is an idiot savant. It is incredibly smart at specific things in specific domains but profoundly stupid at flexible and general tasks. This is why entire jobs are rarely eliminated through automation, except for very narrow, routine jobs, like warehouse picking and packaging, spray painting a car and so on. Accountants use spreadsheets; restaurants use dishwashers, mixers and microwaves. Most automation is partial, as the general worker still outfoxes AI. There are severe limitations to AI in many fields, not least the sheer amount of processing power needed to fuel the applications, as well as limitations in the maths itself. There is also a great deal of hype around the 'cognitive' capabilities of AI, led I suspect by that misleading word 'intelligence'. AI is not conscious and has little in the way of cognitive skills. It may win at Go but it doesn't know it has won.
Flaw 6: Garbage-in, garbage-out
This common flaw, as Walsh rightly spotted, was that the training data in the Frey and Osborne paper was either a 0 or a 1 probability of automation, but the outputs were between 0 and 1. This is an example not so much of garbage-in, garbage-out as binary-in, range-out. You can see this manifest itself in some absurd predictions around jobs that are unlikely to be automated, as well as underestimates in others, like hairdressing, waiting and cleaning. Beware of AI-generated predictions.
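The jump from fractional model outputs to a single headline number is itself worth seeing. A common move – and the one behind the 47% figure – is to pick a cut-off (Frey and Osborne used 0.7 for ‘high risk’) and sum the employment above it. The occupations, probabilities and employment shares below are invented to show the mechanics, not taken from the paper:

```python
# (occupation, model probability of automation, share of total employment)
# All numbers below are illustrative, not Frey and Osborne's.
occupations = [
    ("telemarketer",  0.99, 0.01),
    ("retail sales",  0.92, 0.05),
    ("accountant",    0.94, 0.04),
    ("truck driver",  0.79, 0.06),
    ("hairdresser",   0.11, 0.03),
    ("nurse",         0.05, 0.08),
]

HIGH_RISK = 0.70  # the cut-off for the "high risk" bucket

# Sum the employment share of every occupation above the cut-off.
at_risk_share = sum(share for _, p, share in occupations if p >= HIGH_RISK)
print(f"{at_risk_share:.2f}")  # prints 0.16 for this toy workforce
```

Note what the cut-off hides: a 0.79 counts the whole job as at risk, a 0.69 counts none of it, and the fraction says nothing about when, or how much of, the job actually gets automated.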
Flaw 7: Heuristics
The process of automation in employment is a messy business with many variables. Heuristics can help here. We can categorise jobs first as Cognitive v Manual, then as Cognitive routine, Manual routine, Cognitive non-routine, Manual non-routine. But even the distinction between manual and cognitive is not mutually exclusive – few manual jobs require no knowledge, planning or problem solving. These can be useful rules of thumb, but the world rarely falls neatly into these binary or four-way categories. Yet they often lie at the heart of predictive analysis. Beware of simplistic heuristics.
Flaw 8: Human bias
Bias in analysis is all too common. Take just one example: the analysis of education. The people doing the analysis are often academics or people with an academic bent. The Frey and Osborne paper conflates education into one group, as if kindergarten work were the same as academic research. The routine aspects of education – the fact that most teachers, trainers and lecturers do a lot of admin and work that is actually routine and repetitive – are conveniently ignored. Google, Wikipedia and online management and learning have already eaten into the employment of librarians and teachers. It is a displacement industry. Take one service – Google. As the task of finding things became super-fast, the process of learning, research and teaching became quicker. Library footfall falls, as we no longer have to troop off to the library to get information. Amazon has commoditised the purchase of books. Commoditisation is what technology is good at, and what Marx recognised as a driving force in market economies. Educators don’t like to hear this, but they have a lot to gain here. Teaching is a means to an end, not an end in itself. It has been, and will continue to be, automated – not by robots but by smart, personalised online learning.
Flaw 9: Activities get automated, not jobs
In truth most jobs will be partially automated. This has been going on for centuries with technological advances. Sure, horse grooms and carriage drivers no longer exist, but car mechanics and taxi drivers do. Typesetters have been replaced by web designers. ATMs have simply changed the nature of bank tellers’ work, not completely automated it. Indeed, in many professions the shift has been towards more customer service and less mechanical service. What matters is not the crude measure of ‘jobs’ being automated but rather ‘activities’ being automated. By activities, we mean specific tasks, competences and skills.
Flaw 10: New jobs
“65% of today’s students will be employed in jobs that don’t exist yet.” This is the sort of exaggeration that feeds bad consultancy. Most will be doing jobs that have existed for some time. Many will simply be doing jobs they didn’t plan on doing (and don’t like) or jobs that have changed somewhat through automation. Predicting which jobs or activities get automated is easy compared to predicting what new jobs will be created. The net total is therefore difficult to establish. Fewer people may be needed in certain areas but new jobs will be created, especially in services.
To be fair, more recent analyses have moved on to more fine-grained concepts and data. McKinsey did a detailed analysis of 2,000-plus work activities in 800 occupations, with data from the US Bureau of Labor Statistics and O*NET. They quantified the time spent on these activities and the technical feasibility of automating them. NESTA did a breakdown of specific skills.
The crude headlines will continue, but we’re starting to see more detailed and realistic analysis that will lead to better predictions. This is important, as educational bodies need to be able to adapt what they teach, how they teach it and to whom. As the change accelerates, education and training will need to be more sensitive and adaptive to the changes. This means more accurate prediction of demand and quick adjustments in supply. I’d go for around half of the Oxford figures, with the caveat that more service jobs will be created, so the net total will be 10-20%. There will be no sudden shift in months but a gradual bite by bite into activities within jobs. This is the field I work in, invest in, write and talk about (see WildFire), so I'm not coming at this from the sceptic point of view. AI will change the world and the world of learning, but not in the way we think it will.
