Letting people know that you have something interesting to ‘sell’ is just the first step. Once they learn about your product there is still the small matter of completing the sale and delivering the service/item purchased to the customer.
I wanted to share a recent experience I had as a potential customer. I learnt about a product through an ad on a social media platform, which also advertised an offer price of £7.50. This was a small vendor selling a speciality product.
When I clicked through to the checkout I got a rude shock. The total price was now showing as £11.50! They had added £4 for delivery. I could get free delivery if my order came to more than £20, but I didn’t want to order that many items! I decided that the value I was getting did not match the total cost I would have to pay (so-called ‘net value’ = −£4). So I abandoned my ‘full’ basket.
This is called the ‘Abandoned Basket’ problem – and it is seen in bricks-and-mortar stores as well, where people simply leave a basket full of items behind and walk out of the store.
One might think that was the end of the story. But no! Things have become a lot more ‘technical’.
A few hours later I got an email. Before showing me the real price or anything else that might scare me away, they had captured my contact details! That meant they could try to change my mind at a later date. Unlike a bricks-and-mortar store, an e-retailer can chase after prospective customers (GDPR notwithstanding).
The email was not the usual ‘you have items in your basket – click here to complete your purchase’. No way. They were a lot cleverer than that. They had done their research. The email identified high delivery costs as a common reason why people don’t complete their purchase. It also attempted to justify the £4 shipping cost even though the item was coming from within the UK (I am not sure why).
But I was still not convinced, so I ignored the email. A few hours later I received another email. This one offered me £3 off if I spent at least £10. That took my net value from −£4 to −£1, and I did not need to spend much more than I was originally willing to.
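The back-and-forth above is really just simple arithmetic on perceived value versus total cost. Here is a minimal sketch; the function and the idea of treating the advertised price as the perceived value are my own illustrative assumptions, not the retailer's actual logic:

```python
# Illustrative sketch of the 'net value' reasoning described above.
# All figures come from the story; the value model is an assumption.

def net_value(item_price: float, delivery: float, discount: float = 0.0) -> float:
    """Perceived value minus total cost. We treat the advertised price
    as the value the customer places on the items."""
    total_cost = item_price + delivery - discount
    return item_price - total_cost  # simplifies to discount - delivery

# Initial offer: £7.50 item plus £4.00 delivery
print(net_value(7.50, 4.00))         # -4.0 -> basket abandoned

# Follow-up offer: £3 off a £10+ order (say two items at £7.50 = £15)
print(net_value(15.00, 4.00, 3.00))  # -1.0 -> close enough to convert
```

Note that in this framing the net value is simply the discount minus the delivery charge, which is why the £3 voucher moved it from −£4 to −£1.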
In the end they successfully converted an abandoned basket into a sale and I received the items on time and in good condition!
We can see three main elements in this ‘success’ story:
Getting a foot in the door by capturing the customer’s details before they can ‘run away’ – this gives the retailer a second chance at converting the customer
Understanding what made the customer run away in the first place and attempting to arrive at an acceptable ‘middle-point’
Ensuring that the product/service delivery is pain free to encourage the customer to order again
Simply put, it is the total government debt carried by a country divided by the productive capacity of that country (its GDP). This is similar to the debt-to-earnings ratio used to evaluate the financial worth of companies as well as individuals.
Typically if an individual or a company has a bad debt-to-earnings ratio they will find it tough to get a loan or attract investment. But debt-to-GDP doesn’t work the same way because not all countries are the same!
How does the Debt-to-GDP Ratio work?
Debt by itself is not bad. Similarly, a rising debt-to-GDP ratio may not be a bad thing. Why? Because borrowing is not bad if it leads to a growth in productivity. Productivity here is linked with one or more objective measures such as income growth; we are not talking about subjective measures like ‘personal’ growth.
For example, if you as an individual borrow money to buy a car that you will drive as a taxi at weekends as a second job, then while your debt has increased so has your income (assuming everything goes well). As long as your income (which can be used as a measure of productivity) increases faster than your debt, things will be fine. Obviously, individuals are limited by how much they can increase their income within the time frame of the borrowing. But when it comes to a country the limits are a lot more relaxed. A country can always find productive uses for the money it borrows, for example: strengthening infrastructure, improving education and improving connectivity (both national and international).
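The taxi example can be sketched numerically. The figures below (a £10,000 loan at 5% interest and £2,000/year of extra income growing at 10% a year) are invented purely for illustration:

```python
# A toy version of the taxi example: debt accrues interest while the
# borrower's income from the second job grows faster than the debt.
# All numbers are illustrative assumptions.

debt, income = 10_000.0, 2_000.0

for year in range(1, 6):
    debt *= 1.05    # 5% interest accrues on the outstanding debt
    income *= 1.10  # the second job grows income by 10% a year
    print(f"year {year}: debt-to-income = {debt / income:.2f}")

# Because income grows faster than debt (10% vs 5%), the ratio falls
# each year - the borrower becomes a more attractive lending prospect.
```

The same shape of argument is what makes a country's rising debt tolerable, as long as GDP (the country's 'income') keeps growing faster than the debt.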
If productivity of an individual or a country increases faster than debt then they become an attractive target for future loans.
The flip side is more interesting! If a person spends the borrowed money on meeting day-to-day expenses then it is unlikely their income will rise faster than the debt, if it rises at all. Such an individual will find themselves in trouble with their creditors very quickly. When it comes to a country this logic starts to fail. Some countries end up attracting money even when things are bad all over. In fact, they keep attracting money even when they are not doing well and are at the heart of a global financial crisis!
We can see this clearly in Figure 1, where the USA and GBR (UK) have been borrowing heavily. Their debt-to-GDP ratios show a ‘step up’ right after both countries started borrowing to spend their way out of the 2008 Financial Crisis. The interesting thing is that this data mostly runs only to 2018, and we expect a similar (perhaps larger) ‘step up’ due to Covid-19 relief spending when we review the data for 2020!
A Question of Trust
‘In God we Trust’ is the official motto of the USA. For the financial world it is ‘In the Dollar we Trust’. That explains why, where other countries have faced a massive backlash to high debt-to-GDP ratios – in the form of losing access to cheap borrowing, rating downgrades and currency devaluations – we see, time after time during a crisis, funds from all over the world flowing into the US financial system, allowing the US government to borrow cheaply! It is similar to how all fishing boats rush back into the harbour when facing a storm. This is one reason why it was relatively easy for the US to propose borrowing massive amounts of money (some $3 trillion) to support its economy through the Covid-related lockdown and beyond.
There is a similar narrative of stability and productivity around the UK, which has always been seen as a strong player in the world of financial services, second only to the USA. The UK has similarly been borrowing a lot more without the corresponding growth in GDP. The first Conservative Government of David Cameron (2010 onwards) sought to stem the tide of borrowing by introducing ‘austerity’ and ending the massive spending spree of the previous government, which had been dealing with the 2008 financial crisis. There were all kinds of positive signs that, despite the impact of Brexit on growth, debt growth was coming under control and ‘austerity’ would end for good. All this was before the Covid-related lockdown.
Only the data from 2020 will tell the scale of ‘step-up’ in the debt-to-GDP graph.
If you look at Figure 1, the debt-to-GDP ratio of every country presented is heading in only one direction – ‘up’! It is either the gentle slope of a hill or the steep step of a plateau. India (orange dots) may seem like the odd one out, but that is not the case: in recent budgets the Govt. has been forced to let the deficit widen (the data only runs to 2018), and there are also doubts about the true figure of the Govt. debt.
The big, tip-of-the-iceberg question is ‘what happens next?’ If the US/UK are the safe harbours, what happens when they become less and less safe, especially after Covid? Would that reduce their appeal? What ‘safe harbour’ will all that money seek? When does it become unsustainable? Who are the debt-holders who will take the decision to declare the situation unsustainable? Does a smaller population help in faster recovery?
To give an example, cash-rich economies like China, which have a massive surplus, behave like a fast-food chain. They want you to keep eating more of their food, but not fall ill. Their food is money and the delivery mechanism is the world of finance. It is in their interest that their target markets stay healthy so that they can continue buying from China. Where China cannot find a big market, it plants the seeds of one by financing infrastructure projects that improve its access to trade routes. So it will be interesting to see how the net-exporting countries behave over the next year. This also makes the current UK 5G ban on Huawei equipment very interesting.
Blast from the Past
As a final remark I need to mention what one of my favourite economists, John Maynard Keynes, said about this topic. Politicians remember the first half of his advice: it is fine to run a deficit (i.e. spend more than you earn) in times of great need (e.g. the Great Depression). But they forget the rest of his advice: that the Govt. must balance the budget in times of plenty.
This is common sense. When your income is good it is logical to use it to reduce your debts, so that in times of scarcity you have a lower debt burden and more money left for your own needs.
But this is also political suicide – no elected Govt. would survive if it told people that it was going into austerity mode while things were going well. This is one of the big reasons we see a constant increase in debt-to-GDP across the world: shocks and crises are never in short supply, and it is unpopular to claw back when things are going well. The thinking here is that if you grow your productivity (e.g. as measured by income) fast enough, you can always keep getting a bigger loan and stay one step ahead of the debt-collector.
Or if you are ‘big enough and transparent enough’ as a country, people will always be willing to lend to you (what else will they do with their money?).
Two examples where this did not happen come from the world’s largest democracy, India: demonetisation and maintaining high domestic fuel prices while international crude oil prices fell. In both cases strong steps were taken by the elected Govt., which still came back to power with a larger majority. Unfortunately, in both cases the Govt. managed to lose the advantage gained from these tough steps due to mismanagement.
The ongoing COVID-19 related suppression of economic activity will impact incomes across the board. Irrespective of how the income is generated (e.g. business, employment, self-employment) the impact can be either positive, negative or uncertain.
Positive for those whose incomes are not disrupted or are increased due to demand (e.g. PPE manufacturers, health-care staff, delivery drivers).
Negative for those whose incomes have been disrupted without any relief in sight (e.g. restaurants, people who have been laid off with bad prospects for getting another job).
Uncertain for those who have been furloughed or laid off but with good prospects for getting a job.
With a contraction of anywhere between 6% and 11% predicted, the majority of cases should fall in the ‘Uncertain’ category (I predict 4-7%), and they will move to either the Positive or the Negative category over the next year or so.
Why do I say that?
I say it because there will be different responses to the challenges – from restructuring and process improvements to failing fast and even retraining/reskilling (both at the individual and the organisational level). Depending on how effective a business is at transforming itself to survive, a lot of the people in the ‘Uncertain’ category will quickly transition to the ‘Negative’ category.
One of the main transformation patterns is to carry out process improvements/restructuring with increased automation, so that costs decrease and production/service elasticity increases as incomes first fall and then recover over the medium and long term.
This group of people who jump from Uncertain to Negative is the BIG problem, as it can trigger a long-term contraction in consumption. How can we help these people reskill and retrain so that they can re-enter the job market? What can we do to support people as business income contracts and the pressure to automate increases?
Universal Basic Income
One possible answer to many of these questions is Universal Basic Income. If we provide people with guaranteed support for the basics (e.g. food, rent) then we are not only cutting them some slack but also decoupling ‘survival’ from ‘growth’.
Universal Basic Income (UBI) is a simple concept to understand: all citizens get a basic income every month irrespective of how much they earn. This is guaranteed from the day they turn 18 till the day they die. They may also get a smaller percentage from the day they are born to help their parents with their upkeep.
With UBI a recession will not impact the basics of any household. It will provide a safety net for families and individuals. It will also allow people to develop their skills and innovate.
There are a few wrinkles in this. Firstly, how do we prevent inflation as ‘free money’ is handed out to people? One proposed mechanism is to use a different class of money from the currency of the country. This UBI money cannot be used as a store of value (i.e. it can’t be lent for interest), only for limited exchange (e.g. food, rent). This is similar to the US Supplemental Nutrition Assistance Program (SNAP) – also known as ‘food stamps’ (https://en.wikipedia.org/wiki/Supplemental_Nutrition_Assistance_Program) – which can be exchanged for certain types of food. Several other countries (such as Finland and Canada) have run related experiments. This form of money should also ‘expire’ periodically so that people don’t start using it in a ‘money-like’ way.
Another challenge is how to convert the ‘temporary’ UBI money into ‘permanent’ currency. This is required for businesses accepting UBI money to be able to pass it down the supply chain (both locally and internationally). For example, if you buy all your groceries with UBI money and it is not convertible to currency, how will the grocery shop pay its staff and suppliers? What if the suppliers import groceries from other countries – how would they convert UBI money into any international currency? In SNAP, the stamps are equivalent to money; SNAP does not have the same impact as UBI would, because its cost is a small fraction of total US GDP (around 0.5%).
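The wrinkles above – expiry and limited exchange – can be sketched as a tiny data structure. The class, the allowed categories and the 90-day expiry window are all my own assumptions for illustration, not any real proposed scheme:

```python
# A minimal sketch of 'expiring, limited-exchange' UBI money.
# Class name, categories and expiry window are illustrative assumptions.

from datetime import date, timedelta

class UBICredit:
    EXPIRY_DAYS = 90               # assumed expiry window
    ALLOWED = {"food", "rent"}     # assumed limited-exchange categories

    def __init__(self, amount: float, issued: date):
        self.amount = amount
        self.expires = issued + timedelta(days=self.EXPIRY_DAYS)

    def spend(self, amount: float, category: str, today: date) -> bool:
        """Spending succeeds only before expiry, within allowed categories
        and within the remaining balance - the credit can never be saved
        indefinitely or lent at interest."""
        if today > self.expires or category not in self.ALLOWED:
            return False
        if amount > self.amount:
            return False
        self.amount -= amount
        return True

credit = UBICredit(500.0, issued=date(2020, 6, 1))
print(credit.spend(120.0, "food", today=date(2020, 6, 15)))    # True
print(credit.spend(50.0, "gadgets", today=date(2020, 6, 15)))  # False
print(credit.spend(50.0, "food", today=date(2020, 12, 1)))     # False (expired)
```

Note what is deliberately missing: there is no method to convert the balance into ordinary currency – which is exactly the supply-chain problem described above.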
Still, one should never let a good crisis go to waste! Time to think differently.
In this post let us think about what happens next as we start to come out of the Covid-19 related lockdown.
No country can claim to be immune from the economic effects of the Covid-related lockdown. However, as countries start to emerge from the lockdown some will rebound faster than others.
What is happening now?
Let us next look at where we are today. A large number of people and businesses have seen the flow of money reduce to zero. The expectation of a return on investment is low for a large section of the economy. That said, certain sectors are doing quite well or as normal (e.g. groceries, online retail) as they are getting overflow business.
In this situation, with little or no money going to people/businesses, someone has to step in and be the ‘credible borrower’, borrowing on behalf of those who are struggling. This is the Government as the ‘credible borrower’, which then passes the borrowed money on to its citizens – in a low-waste manner, one hopes. One point here: it is easier for a Government to print money than to borrow it, but printing can lead to inflation without actual growth.
We can take the current situation as an artificial suppression of demand and supply (as people lose incomes and stores are forced to close or reduce visitor numbers).
This can also be understood as a scenario where blood supply to an organ in the body has been blocked. The body reacts in the very short term by reducing the function of that organ and rushing out chemical support to suppress pain but in the long term the body is severely impacted unless the block can be removed and/or another path can be found to deliver the required quantity of blood.
What happens Next and How to Deal with it?
It all comes down to effective planning and effective use of people, processes and tools.
Businesses that have or are able to quickly get the required plans in place for short and long term changes to how they work will benefit from overflow business.
People who are able to re-skill or move from impacted areas to areas of new opportunity will be able to benefit from continued employment during the rebuilding period.
Both of the above should allow some blood to flow to the organ, but they neither restore normal supply nor fix the original damage that caused the block.
Repairing the Damage
The repair will start once the lockdown ends. Those countries that release the lockdown earliest (and are able to ride out the second wave of infections) will have a ‘first mover’ advantage towards normalisation. This should also favour local businesses that step in to fill the gap left by imports where possible.
The key point to keep in mind here is that we will not go back to the status quo, just as scar tissue is never as smooth as the torn skin it replaces. We will lose some businesses. Some people will fall into debt and be unable to recover without help.
Due to loss of incomes, social distancing and widespread work-from-home we will find demand continues to be suppressed for some time to come. This will be especially true for ‘non-essential’ goods. This means the suppressed demand must be unlocked using some of the options we will discuss below.
Who sinks and who swims comes down to how they prepared during the crisis for the post-crisis period (i.e. those who did not look to change business-as-usual, and let a good crisis go to waste, will sink) and how effectively they can implement those strategic plans in the coming months. This is a good example of ‘survival of the fittest’.
Who will survive:
Those who are quick to plan and implement new processes that allow them to generate revenue.
Those who have deep pockets to fall back on, for the next 12 months (at least)
Those who are able to focus on their strengths and optimise resources – when counting on deep pockets (the previous point) we must remember that “markets can remain irrational for longer than you can remain solvent” (attributed to John Maynard Keynes)
Those who are directly benefiting from the crisis (short term survival)
Those who enjoy a good name in the market or are ‘expected’ by the market to bounce back quickly
But what is the Recipe for Success? What should we do more of as a business?
Advertise: Replace front-office with a slick website, smartphone app and/or virtual agent (even a chat-bot helps handle the first level of queries)
Process transformation: Reduce the need for manual processes in business operations – this is not something only multi-million-pound businesses need to do! In fact, it is something everyone needs to do!
Digitise and Automate as much as possible – from fundamental building block apps (e.g. billing) to more advanced planning, optimisation and prediction apps (Here is a golden chance for AI at the lower price-point. Or even local AI consultancy)
Concentrate on strengths and focus your resources on the service/product that provides the greatest rewards – enable home delivery where possible; smartphones plus hybrid/electric vehicles should reduce the cost of operations and bring home delivery to the same price point as in-store
Don’t stop innovating.. innovation is the hidden strength of any business (large or small!)
As an individual facing an uncertain future in terms of employment, a lot of the above points are just as relevant (once the context is changed):
Advertise your existing skills and experience (make a website, LinkedIn profile), talk about your interests and hobbies! Blog!
Look inward: Look at all the good stuff you have done, all the mistakes you have made and the lessons you have learnt. Try changing something small that you think will improve the way you feel about yourself. For me this was making sure I get plenty of outdoor playtime with my kids!
Prepare your tools: make a CV, take stock of where you have been and where you want to get to! You won’t get another chance like this to plan your career!
Concentrate on your strengths: reduce expenditure, improve efficiency by doing the important things and ignoring things that lead to waste of time, money or both. One personal example: we started cooking more at home which resulted in not only money saving but also us discovering new things that we could make at home!
Don’t stop learning! Now is the time to take a risk. Make sure you use all the tools available to engage with people who are leaders in your field of learning as well as fellow students – this can be anything – from cooking to a language
Don’t stop thinking and creating. Write a short story, create a new dish, draw a picture, change the layout of your living room! These act as massive confidence boosters
Additional Thoughts: Automation
Automation was on the rise before Covid. The bigger players have already moved online and use automation-enabled IT, and therefore continue to sell effectively (albeit within constraints). But the contact-less nature of the solution to this problem will push app/online interaction even more. As this happens, it becomes easier to automate the interaction. Two small examples:
Pizza shops now only support cashless delivery, with no collection. Therefore, all my interaction with the pizza shop is through their website or an app (e.g. JustEat). The pizza is placed on my doorstep and I hardly even see the delivery person, as they back away more than 2 metres and leave as soon as they see me pick up the pizza.
Food stalls in various food markets have started home deliveries (again cashless and contact-less). Earlier they would hire staff to manage long queues, today they operate behind a slick website (that you can throw up in a few hours), a scheduling tool, and WhatsApp messaging to personalise the interaction.
This effect, when combined with the long-term trend of more people working from home (which is bound to accelerate now), is an opportunity for small businesses to deliver local services through different app-based platforms involving lots of automation (to make it cheaper). The smaller players have to make use of the same force-multiplier tools, platforms and channels as the bigger players, right now! The most basic one is the ability to accept online orders and payments.
Now that people don’t travel for work, they no longer form a captive market for food vendors, coffee shops and bars. But these things can come to their doorstep! With automation-enabled IT the cost of home delivery can be managed, especially with the added benefits of scale.
Finally: I am still waiting for the day I can order Starbucks coffee to my home for the same price (if not cheaper) as what I get in the stores. Starbucks could open coffee-making kitchens in different areas and serve the area from there. Automation will help by providing seamless links between different stages, AI-based planning and prediction of demand.
Covid-19 has been wreaking havoc across the globe. But this was also expected given the fact that we have not been the best of tenants for Mother Earth.
All the doom and gloom aside, Covid-19 and the mass lockdowns are teaching us a very important lesson about the future of automation and technology.
In a single line:
A secure future requires smart people working on smart devices using smart infrastructure!
Figure 1 shows the interactions between Smart People, Things and Infrastructure.
The Covid-19 crisis, which has brought life to a standstill, has exposed the weakness of our automation maturity. Services from haircutting to garbage collection have been trimmed back, mostly as a proactive step. Whatever automation we do have, has helped tremendously (e.g. online grocery shopping) even as people’s behaviour changed overnight as panic set in.
So what is the panic about? What are the basics that we need? The panic is about running out of resources like food due to a collapse of supply chains which have been optimised to reduce warehousing costs.
Supply chains (logistics) are heavily dependent on people: from farmers growing crops and workers building things to drivers transporting the product to the shops (or directly to your home).
This is not the only critical system to break down if large number of people fall ill at the same time.
Healthcare is another area that has been impacted by the lockdown. Care has to be maintained to protect vulnerable people, which means minimising contact. This in turn increases their vulnerability due to isolation.
Education has also been impacted, with schools closed and exams postponed or cancelled. This might not seem like a big problem, but consider the impact on future results.
Another area of concern is the utility networks. Can we truly survive with disruptions to our electricity or water networks?
If automation is improved in the above areas then we will become more resilient (though not immune) to such events in the future – which is as difficult to achieve as it sounds!
Before a drone can be piloted remotely for hundreds of miles or a truck driven under human supervision from a port to a local warehouse we need robust telecom infrastructure to provide reliable, medium-high bandwidth, low-latency, temporary data connections.
This magic network has three basic ingredients:
Programmable network – devices that can be treated like ‘software’ and provide the same agility > significant progress has already been made in this area.
Network slicing – to efficiently provide the right resources to the requesting service > a lot of work is ongoing in the context of 5G networks
Closed-loop, light-touch orchestration – to help people look after a complex network and to make changes quickly and safely when required (e.g. providing a reliable mobile data link to a drone carrying a shipment of food from a wholesaler to a shop, for the remote-piloting use-case) > significant progress has been made and a lot of work is ongoing
Using such a network we can build other parts of the puzzle such as smart roads, smart rails and then smart cities. All of these help improve automation and support increasingly light touch automation use-cases.
Once we have the Smart Infrastructure we need Smart Things to use them.
For Logistics and maintaining a robust supply chain during a pandemic we need a fleet of autonomous/remotely supervised/remotely piloted vehicles such as heavy-lift drones, self-driving trains/cars/ships/trucks. We also need similar assistance inside warehouses and factories with robots carrying out the operations with human supervision (so called Industry 4.0 / Lights-out factory use-case).
Healthcare – requires logistics as well as the development of autonomous personal health monitoring kits that augment the doctor by allowing them to virtually examine a patient. These kits need to become as common as a thermometer and should fulfil multiple functions.
For scenarios related to caring for vulnerable people, semi-autonomous robots are required that can do a lot of the work (e.g. serve dinner).
In case of a lockdown, a teacher should be able to create virtual classrooms with similar level of interactivity (e.g. via AR/VR) as in a real classroom.
To maintain water, electricity and other utilities we need sensors that provide a snapshot of the network as well as actuators, remote inspection and repair platforms etc.
For all of this to be done remotely (e.g. in a lockdown scenario) we need a robust telecoms network. Without a data connection, people would find it far harder to deal with the economic, mental, physical and emotional shock caused by a lockdown.
So who will these people be, the ones who can remotely pilot/supervise a drone carrying a crate of toilet rolls from a warehouse in Bristol to a shop in Bath? Well-trained people, of course!
This requires two important things:
Second Job: Everyone should be encouraged to take up a second discipline (of their own interest) in a semi-professional capacity. This helps increase redundancy in the system. For example, if you are a taxi driver with an interest in radio, maybe your second job could be as a maintenance technician.
Thinking beyond data-science and AI: Tech is everywhere and AI is not the final word in hi-tech. People should receive everyday technology training and if possible advanced technology training in at least one topic. E.g. everyone should be taught how to operate a computer but they should also be allowed to choose a topic for deeper study, like security, software development, IT administration etc.
Augmentation technologies should be made more accessible, including basic training in Augmented and Virtual Reality systems, so that in the case of a lockdown human presence can be projected via a mobile platform such as a drone, or via a platform integrated within, say, a forklift or a truck.
Adaptation: This is perhaps the most important. This means not leaving anyone behind in the tech race. Ensuring all technologies allow broad access. This will ensure that in times of trouble technology can be accessed not only by those who are most able to deal with the issues but also those who are the most vulnerable.
All of the above require the presence of smart things!
Thus we have four themes of Logistics, Healthcare, Education and Utilities running across three layers: Smart People -> Smart Things -> Smart Infrastructure. That is what Covid-19 has taught us. A very important lesson indeed, so that the next time around (and there WILL be a next time), we are better prepared!
Digitisation of services is all around us. Where we used to call for food, taxi, hotels and flights we now have apps. This ‘app’ based economy has resulted in a large number of highly specialised jobs (e.g. app developers, web designers). It also impacts unskilled or lower skilled jobs as gaps in the digitisation are filled in with human labour (e.g. physical delivery of food, someone to drive the taxi).
The other side of digitisation is automation. Where manual steps are merely digitised, the data-processing steps can still involve human labour (e.g. you fill in a form online, a human processes it, a response letter is generated and a human puts it in an envelope for posting).
In the case of a fully automated and digitised service, processing your data involves ‘machine labour’ (with different levels of automation [see http://fisheyefocus.com/fisheyeview/?p=863]) and any communication is also electronic (e.g. email, SMS). One very good example of this is motor insurance: you enter your details via a website or app, risk models calculate the premium on the fly and, once payment is made, all insurance documents are emailed to you. The only involvement of human labour is in the processing of claims and the physical validation of documents. This is called an ‘e-insurer’.
Automation involves replacing or augmenting human labour with machine labour. Machines can work 24×7 and are not paid salaries – hence the cost savings. However, machines need electricity and infrastructure to work, and they cannot self-assemble, self-program or self-maintain (the so-called Judgement Day scenario from the Terminator series, where they can, remains fiction). Human labour is still required to develop and maintain an increasingly large number of (complex) automated systems. Human labour is also required to develop and maintain the infrastructure (e.g. power grids, telecom networks, logistics supply chains) that works alongside the automated systems.
So humans earn indirectly from machine labour but in the end automation and digitisation help save large amounts of money for companies by reducing operational costs (in terms of salaries, office space rentals etc.). Another side-effect is that certain types of jobs are no longer required as automation and digitisation pick up pace.
Impact on Consumption
Now we know from basic economics that all consumption results in someone earning an income.
For a company, the income is the difference between the value of what they sell and their total costs (fixed + variable) in making and selling it.
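That definition is one line of arithmetic. A minimal sketch, with all figures invented purely for illustration:

```python
# A toy version of the income equation above:
# income = revenue - (fixed costs + variable costs).
# All numbers are illustrative assumptions.

def company_income(units_sold: int, price: float,
                   fixed_costs: float, variable_cost_per_unit: float) -> float:
    revenue = units_sold * price
    total_costs = fixed_costs + units_sold * variable_cost_per_unit
    return revenue - total_costs

# 1,000 units at £20, with £5,000 fixed costs and £8 variable cost per unit
print(company_income(1_000, 20.0, 5_000.0, 8.0))  # 7000.0
```

The fixed-cost term is why automation is so attractive: it converts recurring variable costs (salaries per unit of work) into a largely fixed, one-off investment.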
A company will increase digitisation and automation with a view to increasing its total income. This can happen by targeting the automation of processes that increase sales or decrease costs. A company will also automate to maintain service levels so as not to lose customers to the competition, but there will always be some element of income increase involved here as well.
If costs are reduced by digitisation (e.g. less requirement for a physical 'front office') and/or automation (e.g. fewer people for the same level of service), it can lead to loss or reduction of income as people are made redundant or move to suboptimal roles (e.g. a bank teller working in a supermarket). This also contributes to the 'gig' economy, where apps provide more 'on-demand' access to labour (e.g. Uber).
People consume either from what they earn (income) or from borrowing (e.g. credit cards and loans). If incomes go down, it can either hit consumption or, in the short term, lead to increased borrowing. This decrease in consumption can impact the same companies that sought an increase in income through automation and digitisation.
Automation and Digitisation leads to cost savings by introducing electronic systems in place of a manual process.
If fewer people are required to do the same job or maintain a given level of output, then employers are likely to hire fewer new workers and/or reduce the size of the workforce over time.
This will reduce the income of people who are impacted by redundancies and change of job roles.
This in turn will reduce the consumption of those people, which may hit the very same companies that are introducing automation and digitisation.
This in turn will further squeeze margins and thereby force further reductions in costs or an increase in consumption from some quarter…
And we seem to be trapped in a vicious circle!
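The circle above can be sketched as a toy simulation. Every parameter here is invented for illustration; the only point is the direction of the feedback, not the numbers:

```python
# Toy feedback-loop sketch (all parameters invented for illustration):
# each round, automation trims the wage bill; lower wages mean lower
# consumption, which feeds back as lower company revenue.
def simulate(rounds, wages=100.0, spend_rate=0.9, automation_cut=0.1):
    revenues = []
    for _ in range(rounds):
        revenue = wages * spend_rate      # workers' spending becomes revenue
        wages *= (1 - automation_cut)     # automation reduces the wage bill
        revenues.append(round(revenue, 1))
    return revenues

print(simulate(4))  # revenue shrinks round after round
```

Run with the defaults, revenue falls every round – the vicious circle in miniature.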
This Sounds Like Bad News!
So looking at the circular nature of flows in an economy, as described in the previous section, we can predict some sort of impact on consumption when large scale digitisation and automation takes place.
As an aside, this is a major reason why ‘basic income’ or universal income is a very popular topic around the world (read more: https://en.wikipedia.org/wiki/Basic_income). With basic income we can guarantee everyone a minimum lifestyle and thereby promise a minimum level of consumption.
The actual manifestation of this issue is not as straightforward as our circular reasoning, from the previous section, would indicate. This is because the income of a company depends upon several factors:
1. External consumption (exports)
2. Amount consumed by those whose income increases due to automation and digitisation
3. Amount consumed by those whose income decreases due to automation and digitisation
4. Labour costs attributed to those who implement and support automation and digitisation
5. Labour costs attributed to those who are at risk of being made redundant due to automation and digitisation (a reducing value)
6. Variable costs (e.g. resource costs)
Exports can help provide a net boost to income – this external consumption may not be directly impacted by automation and digitisation (A&D). It may be indirectly boosted if the A&D activities lead to imports from the same countries.
The two critical factors are (2) and (3): namely how much of the output (or service) is sold to people who benefit from A&D and how much is sold to those who do not benefit from A&D.
If a company employs a large number of people who can be made redundant via A&D activities and a large portion of their consumers are those whose incomes will be impacted by A&D then we have a very tight feedback loop – which can lead to serious loss of income for the employer, especially if it ties in with an external shock (e.g. increase of a variable cost like petroleum).
On the other hand if a company caters to people whose incomes increase with A&D (e.g. software developers) then the impact to its income will be a lot less pronounced and it may even increase significantly.
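The factor list above can be arranged into a simple accounting sketch. The function names and figures below are my own, not from any real company; the point is only that the balance of factors 2 and 3 drives the outcome:

```python
# Hedged sketch of the factors listed above (names and numbers are my own):
def income_from_factors(exports, sales_to_ad_winners, sales_to_ad_losers,
                        ad_labour_costs, at_risk_labour_costs, variable_costs):
    revenue = exports + sales_to_ad_winners + sales_to_ad_losers
    costs = ad_labour_costs + at_risk_labour_costs + variable_costs
    return revenue - costs

# A company selling mostly to people whose incomes fall with A&D sees
# its revenue shrink as sales_to_ad_losers drops over time.
print(income_from_factors(20, 10, 50, 15, 25, 20))  # -> 20
```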
What works best is when a company can sell to both and has enough space for both A&D activities and manual labour. This means it can make money from both sides of the market. Good examples are companies like Amazon, McDonald's and Uber, which have human components integrated with A&D, which then acts as a force multiplier.
Using this framework we can analyse any given company and figure out how automation will impact them. We can also understand that in the short term A&D can have a positive effect as it acts as a force multiplier, opening new avenues of work and creating demand for different skills.
Real issues can arise if automation is stretched further to complex tasks such as driving, parcel delivery and cooking food. Or digitisation is taken to an extreme (e.g. e-banks where you have no physical branches). This will have a large scale impact on incomes leading to a direct reduction in demand.
One way to force a minimum level of consumption is for the government to levy special taxes and transfer that income directly to those who need it. This will make sure those who are unskilled or have only basic skills are not left behind. This is a 'means-tested' version of basic income, similar to a benefits system.
The next step will be to re-skill people to allow them to re-enter the job market or start their own business.
I recently read a book called 'Meltdown' by Chris Clearfield and Andras Tilcsik.
The book provides a framework to reason about complex systems that can be found all around us (from the cars we drive to processes in a factory). The word ‘system’ is used in the generic sense where it means a set of components interacting with each other. Each component expects some sort of input and provides some sort of output.
The decomposition of a system into components can be done at different levels of detail. The closer we get to the 'real' representation, the more complex the interactions between components (or sub-systems) can get. Imagine the most detailed representation of a computer chip, which incorporates within it a quantum model of the transistor!
Let us look at some important points to consider when trying to understand a complex system. These allow us to classify and select appropriate lines of attack to unravel the complexity.
1. Complexity of Interaction
Complexity arises when we have non-linear interactions between systems. Linear interactions are always easier to reason about and therefore to fix in case of issues. With non-linear interactions (e.g. feedback loops) it becomes difficult to predict the effect of changing inputs on the output. Feedback loops, if unbounded (i.e. not convergent), can lead to catastrophic system failures (e.g. incorrect sensor data leading to a wrong automated response, which worsens the situation).
Solution: Replace feedback loops with linear interactions. Where it is not possible to break a feedback loop, add circuit breakers or a delay in reaction.
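A circuit breaker can be very small. Below is a minimal sketch (the structure and thresholds are my own, not from the book): after a run of consecutive failures the breaker opens and refuses further calls, stopping a failing interaction from feeding back into the system.

```python
# Minimal circuit-breaker sketch (structure and thresholds are my own):
class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, action):
        if self.failures >= self.max_failures:
            return "open: refusing call, breaking the feedback loop"
        try:
            result = action()
            self.failures = 0            # a success resets the counter
            return result
        except Exception:
            self.failures += 1           # repeated failures trip the breaker
            return "failed"

def flaky():
    raise RuntimeError("bad sensor reading")

cb = CircuitBreaker(max_failures=2)
print([cb.call(flaky) for _ in range(3)])
```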
2. Tight Coupling
When two or more systems are tightly coupled, it is quite easy to bring them all down by taking down just one. Adding slack to the interaction between systems requires each system to be able to deal with imprecise, inaccurate and missing inputs while preserving some sort of functional state.
Solution: Allow a clear statement of inputs, outputs and acceptable ranges. Provide internal checks to ensure errors do not cross component boundaries. Provide a clear indication of the health of a component.
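A boundary check of this kind can be a few lines. In this sketch the acceptable range, the fallback value and the function name are all invented for illustration; the point is that a bad input never crosses the boundary, and health is reported alongside the value:

```python
# Sketch of a component boundary check (ranges and names invented):
def read_temperature(raw_value, lo=-40.0, hi=125.0, fallback=20.0):
    """Accept imprecise or missing input, but never let an out-of-range
    value cross the component boundary; report health with the value."""
    if raw_value is None or not (lo <= raw_value <= hi):
        return fallback, "degraded"      # preserve a functional state
    return raw_value, "healthy"

print(read_temperature(21.5))   # (21.5, 'healthy')
print(read_temperature(999.0))  # (20.0, 'degraded')
```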
Any system (or group of systems) requires monitoring to support control decisions. For example, when operating a car we monitor speed, fuel and the dashboard (for warning lights). Any system made up of multiple components/sub-systems should ideally have a monitoring feed from each of the components. But often we cannot get a feed directly from a component, or doing so would lead to information overload, so we rely on observer components (i.e. sensors) to help us. This adds a layer of obfuscation around a component. If the sensor fails then the operator/controller has no idea what is going on or, worse, has the wrong idea without knowing it and therefore takes the wrong steps. This is a common theme with complex systems such as nuclear reactors, aeroplanes and stock markets, where indirect measurements are all that is available.
The other issue is that when a system is made up of different components from different providers, each component may not have a standard way of providing status. For example in modern ‘cloud enabled’ software we have no way of knowing if a cloud component which is part of our system has failed and restarted. It may or may not impact us depending on how tightly coupled our components are to the cloud component and if we need to respond to any restarts (e.g. by flushing cached information).
While it is difficult to map any system approaching day-to-day complexity to figure out where it can fail or degrade, we can use techniques such as 'anomalising' to make sure cases of failure are recorded and action is taken to prevent future occurrences. The process is straightforward:
Gather data – collect information about the failure from the different monitoring feeds (this is why monitoring is critical)
Address the root cause – monitor replaced components and new procedures, while making sure the root cause is identified (e.g. was the component at fault, or is it a deeper design issue? Are we just treating the symptom and not the cause?)
Publicise the solution – ensure it becomes part of 'best practice'
Audit – measure the effectiveness of the solution
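One way to make the process above concrete is to record each failure as a structured object whose fields mirror the four steps. This is a sketch of my own, not from the book; the field names are invented:

```python
# One way to record failures so they feed 'best practice' (field names mine):
from dataclasses import dataclass, field

@dataclass
class Anomaly:
    description: str
    monitoring_data: list = field(default_factory=list)  # step: gather data
    root_cause: str = ""                                 # step: address root cause
    publicised: bool = False                             # step: publicise solution
    audited: bool = False                                # step: audit effectiveness

    def is_closed(self):
        """An anomaly is only closed once every step has been completed."""
        return bool(self.root_cause) and self.publicised and self.audited

a = Anomaly("pump overheats", monitoring_data=["temp feed", "vibration feed"])
a.root_cause = "worn bearing, not the pump design"
a.publicised = a.audited = True
print(a.is_closed())  # -> True
```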
Most interesting systems involve a human in some role:
operator (e.g. pilot)
controller (e.g. traffic controller)
supervisor (e.g. in a factory)
beneficiary (e.g. patient wearing a medical device)
dependent (e.g. passenger in a car)
So the big question is: how can we humans improve the way we work with complex systems? Or, the other way around: how can complex systems be improved to allow humans to work with them more effectively?
There is a deceptively simple process that can be used to peel back some of the complexity. We can describe this as a ‘check-plan-and-proceed’ mechanism.
Review how the interaction with a given system has gone in the previous time frame (week/month/quarter) [Check]
Figure out what can be improved and create a list of changes to try in the next time frame [Plan]
Apply those changes during the next time frame [Proceed]
This allows the human component of a complex system to learn in bite-sized chunks.
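The check-plan-proceed loop can be sketched in a few lines. The function names and the record format are my own invention; the cap of three changes is just one way to keep the chunks bite-sized:

```python
# A check-plan-proceed loop in miniature (names and data format are my own):
def check(history):
    """Check: review how the last time frame went."""
    return [item for item in history if item["outcome"] == "problem"]

def plan(problems):
    """Plan: turn problems into a small list of changes to try."""
    return ["fix: " + p["what"] for p in problems][:3]  # bite-sized chunks

def proceed(changes):
    """Proceed: apply the changes in the next time frame."""
    return {"applied": changes}

history = [{"what": "late handover", "outcome": "problem"},
           {"what": "daily report", "outcome": "ok"}]
print(proceed(plan(check(history))))  # -> {'applied': ['fix: late handover']}
```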
This also helps in dealing with dynamic systems (such as stock markets) where (as per the book) it is the weather prediction equivalent of ‘predicting a tornado rather than simply rainfall’. When the check-plan-and-proceed mechanism is abandoned we get systems running amok towards a ‘meltdown’ – be it a nuclear meltdown, stock market crash, plane crash or collapse of a company.
In the last few years buzzwords such as Machine Learning (ML), Deep Learning (DL), Artificial Intelligence (AI) and Automation have taken over from the excitement of Analytics and Big Data.
Often ML, DL and AI are used in the same context, especially in product and job descriptions. This not only creates confusion as to the end target, it can also lead to loss of credibility and wasted investment (e.g. in product development).
Figure 1 shows a simplified version of the framework for automation. It shows all the required ingredients to automate the handling of a ‘System’. The main components of this framework are:
A system to be observed and controlled (e.g. telecoms network, supply chain, trading platform, deep space probe …)
Some way of getting data (e.g. telemetry, inventory data, market data …) out of the system via some interface (e.g. APIs, service endpoints, USB ports, radio links …) [Interface <1> Figure 1]
A ‘brain’ that can effectively convert input data into some sort of actions or output data which has one or more ‘models’ (e.g. trained neural networks, decision trees etc.) that contain its ‘understanding’ of the system being controlled. The ‘training’ interface that creates the model(s) and helps maintain them, is not shown separately
Some way of getting data/commands back into the system to control it (e.g. control commands, trade transactions, purchase orders, recommendations for next action etc.) [Interface <2> Figure 1]
Supervision capability which allows the ‘creators’ and ‘maintainers’ of the ‘brain’ to evaluate its performance and if required manually tune the system using generated data [Interface <3> Figure 1] – this itself is another Brain (see Recursive Layering)
This is a so-called automated 'closed-loop' system with human supervision. In such a system the control can be fully automated, fully manual, or any combination of the two for different types of actions. For example, in safety-critical systems the automated closed loop can have cut-out conditions that disable Interface <2> in Figure 1. This means all control passes to the human user (via Interface <4> in Figure 1).
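The cut-out behaviour can be sketched in a few lines. Everything here is invented for illustration (the rule in the 'brain', the observation values and the thresholds); the point is only the shape of the loop: observations come in via Interface <1>, commands go out via Interface <2>, and past a cut-out threshold control passes to the human via Interface <4>:

```python
# Sketch of the closed loop with a safety cut-out (thresholds invented):
def brain(observation):
    """Toy model: a simple rule mapping observations to actions."""
    return "throttle_down" if observation > 0.8 else "hold"

def closed_loop(observations, cutout_threshold=0.95):
    for obs in observations:            # Interface <1>: data out of the system
        if obs > cutout_threshold:      # cut-out condition: disable Interface <2>
            yield ("manual", "control passed to human via Interface <4>")
        else:
            yield ("auto", brain(obs))  # Interface <2>: command into the system

print(list(closed_loop([0.5, 0.85, 0.99])))
```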
A Note about the Brain
The big fluffy cloud in the middle called the 'Brain' hides a lot of complexity, not only in terms of algorithms and infrastructure but even in how we talk about the differences between things like ML, DL and AI.
There are two useful concepts for putting all these buzzwords in context when it comes to the 'Brain' of the system. In other words, the next time some clever person tells you that there is a 'brain' in their software/hardware that learns… ask them two questions:
How old is the brain?
How dense is the brain?
Age of the Brain
Age is a very important criterion in most tasks. Games that preschool children struggle with are 'child's play' for teenagers. Voting and driving are reserved for 'adults'. In the same way, for an automated system the age of the brain says a lot about how 'smart' it is.
At its simplest, a 'brain' can contain a set of unchanging rules that are applied to the observed data again and again [so-called static rule-based systems]. This is similar to a newborn baby that has fairly well-defined behaviours (e.g. hungry -> cry). This sort of brain is pretty helpless when the data has large variability. It will not be able to generate insights about the system being observed, and the rules can quickly become error-prone (thus the age-old question: 'why does my baby cry all the time!').
Next comes the brain of a toddler which can think and learn but in straight lines and that too after extensive training and explanations (unless you are a very ‘lucky’ parent and your toddler is great at solving ‘problems’!). This is similar to a ‘machine learning system’ that is specialised to handle specific tasks. Give it a task it has not trained for and it falls apart.
Next comes the brain of a pre-teen which is maturing and learning all kinds of things with or without extensive training and explanations. ‘Deep learning systems’ have similar properties. For example a Convolutional Neural Network (CNN) can extract features out of a raw image (such as edges) without requiring any kind of pre-processing and can be used on different types of images (generalisation).
At its most complex, (e.g. a healthy adult) the ‘brain’ is able to not only learn new rules but more importantly evaluates existing rules for their usefulness. Furthermore, it is capable of chaining rules, applying often unrelated rules to different situations. Processing of different types of input data is also relatively easy (e.g. facial expressions, tone, gestures, alongside other data). This is what you should expect from ‘artificial intelligence‘. In fact with a true AI Brain you should not need Interface <4> and perhaps a very limited Interface <3> (almost a psychiatrist/psycho-analyst to a brain).
Density of the Brain
Brain density increases with age, then stops increasing and starts to decrease. From a processing perspective, it is as if the CPU in your phone or laptop kept adding additional processors and therefore became capable of doing more complex tasks.
Static rule-based systems may not require massive computational power. Here, more processing power may be required at Interfaces <1>/<2> to prepare the data for input and output.
Machine-learning algorithms definitely benefit from massive computational power, especially when the 'brain' is being trained. Once the model is trained, however, applying it may not require much computing power. Again, more power may be required to massage the data to fit the model parameters than to actually use the model.
Deep-learning algorithms require computational power throughout the cycle of prep, train and use. The training and use times are massively reduced when using special-purpose hardware (e.g. GPUs for neural networks). One rule of thumb: 'if it doesn't need special-purpose hardware then it's probably not a real deep-learning brain; it may simply be a machine-learning algorithm pretending to be one'. CPUs are mostly good for the data-prep tasks before and after the 'brain' has done its work.
If we were to have only Interfaces <1> and <3> (see Figure 1), we could call it an analytics solution. This type of system has no ability to influence the system it observes; it is merely an observer. This is very popular, especially on the business-support side. Here Interface <4> may not always be something tangible (such as a REST API or a command console); it might represent strategic and tactical decisions. The 'Analytics' block in this case consists of data visualisation and user interface components.
To enable true automation we must close the loop (i.e. Interface <2> must exist). But there is something that I have not shown in Figure 1 which is important for true automation. This missing item is the ability to process event-based data. This is very important especially for systems that are time dependent – real-time or near-real-time – such as trading systems, network orchestrators etc. This is shown in Figure 2.
Note: Events are not only generated by the System being controlled but also by the ‘Brain’. Therefore, the ‘Brain’ must be capable of handling both time dependent as well as time independent data. It should also be able to generate commands that are time dependent as well as time independent.
Recursive Layering is a powerful concept where an architecture allows for its implementations to be layered on top of each other. This is possible with ML, DL and AI components. The System in Figures 1 and 2 can be another combination of a Brain and controlled System where the various outputs are being fed in to another Brain (super-brain? supervisor brain?). An example is shown in Figure 3. This is a classic Analytics over ML example where the ‘Analytics’ block from Figure 1 and 2 has a Brain inside it (it is not just restricted to visualisation and UI). It may be a simple new-born brain (e.g. static SQL data processing queries) or a sophisticated deep learning system.
The Analytics feed is another API point that can be an input data source (Interface <1>) to another ‘Brain’ that is say supervising the one that is generating the analytics data.
So next time you get a project that involves automation (implementing or using it), think about the interfaces and components shown in Figure 1. Think about what type of brain you need (age and density).
If you are on the product side, then by all means make bold claims, but not illogical or blatantly false ones. Just as you would not ask a toddler to do a teenager's job, don't advertise one as the other.
Finally, think hard about how the users will be included in the automation loop. What conditions will disable Interface <2> in Figure 1 and cut out to manual control? How can the users monitor the 'Brain'? Fully automated, closed-loop systems are not good for anyone (just ask John Connor from the Terminator series, or the people at Knight Capital https://en.wikipedia.org/wiki/Knight_Capital_Group). Humans often provide deeper insights, based on practical experience and knowledge, than ML or DL is capable of.
It is encouraging to see availability of recyclable packaging such as plastic wrappers, cans and food containers. But we see the problem of incorrect disposal, littering and lack of waste segregation everywhere (here I believe developed and developing countries are alike).
What incentive can the public be given not only to correctly dispose of their litter but also to pick up after others?
One common method has been the use of bottle/can banks, where you return empty bottles and/or cans and get some money in return.
My idea is to extend this and streamline it. The concept is simple:
All packaging is uniquely identified using an RFID tag, barcode or QR code that identifies both the source of the packaging and the unique package itself
Everyone buying packaged items has a Recycle Card (app or physical)
(Optional) People buy items using electronic cash (e.g. credit cards) – to attach personal details
At checkout the item is scanned (and the package code is scanned alongside it) – over time these could be the same code.
Alongside the bill, a full electronic list of all the packaging you have purchased is provided on the Recycle Card app (at the time of purchase).
The ‘value’ of that packaging in terms of the local currency will also be shown.
Upon successfully recycling the packaging, a part of that ‘value’ will be credited to the person. This can be a monthly or weekly process.
Any litter found is scanned. The full 'value', along with a small fine, is debited from the associated Recycle Card. The Recycle Card of the person who found the litter and correctly disposed of it gets a small credit.
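The credit/debit rules above can be sketched as a small settlement function. All values, event names and rates here are invented for illustration, not part of any real scheme:

```python
# Toy ledger for the Recycle Card scheme (values, names and rates invented):
def settle(package_value, event, fine=0.5, finder_reward=0.2):
    """Return (buyer_delta, finder_delta) for one piece of packaging."""
    if event == "recycled_by_buyer":
        return (package_value, 0.0)                  # value credited back
    if event == "littered_found_by_other":
        return (-(package_value + fine), finder_reward)  # debit + fine; finder credited
    return (0.0, 0.0)                                # still in circulation

print(settle(1.0, "recycled_by_buyer"))        # (1.0, 0.0)
print(settle(1.0, "littered_found_by_other"))  # (-1.5, 0.2)
```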
This means we recognise the value (in terms of money) of the packaging and not just the contents. This, I believe, is partially happening already, where 'green' products with innovative packaging attract premium prices.
Furthermore, we should attach a loss (again in terms of money) to improper disposal of the packaging. Today that is done only through fines, without direct accountability.
There are two important steps here:
Detecting successful disposal: This should be automated, probably with some sort of machine at the recycling centre that can scan and tally the packaging and indicate which Recycle Card should be credited. Packaging is unlikely to arrive intact at the recycling centre, so multiple markers need to be provided. RFIDs are a good solution but may be too expensive for regular use. One option is a dye that exhibits fluorescence under certain light, giving a code that can be detected using machine vision. This is similar to the Automatic Number Plate Recognition software that has become very popular at car parks, toll plazas and petrol pumps.
Registration of the Recycle Card: This should be a global system, mainly because the problem of plastics and other packaging materials will impact everyone, especially if they end up in our oceans. People should be obligated to correctly dispose of packaging wherever they are in the world. Those who do so should be rewarded and those who don't, penalised. To ensure this, every piece of packaging must be uniquely identified. This is a big task, and I am sure there will be manufacturers (perhaps small/medium-sized ones or those from the informal sector) who will not follow this system (at least in the beginning, due to cost etc.). But the idea is to target the 80% before we target the 20%, in the sense that big companies like Unilever, Nestle, etc. and fast-food chains like McDonald's have the capacity to upgrade their packaging. These are also mass-consumption products, so it would have a noticeable impact.
Do let me know what you think about this idea!
Somewhere in there are a few good machine-learning and big-data use-cases. 🙂
Agile methods are iterative and incremental. This, in theory, should prevent implementation death marches which end up with products that do not meet the customer’s needs.
Waterfall, on the other hand, is all about having predictable stages with clear milestones at the end of each stage. There is no concept of iterations or increments. All of a stage (like design) is done and validated, and only then is the next stage started. The concept here is that validation at the end of each stage keeps the implementation aligned.
Unfortunately, both say nothing about the twin human-factor problems of over-excitement and incompetence of the people involved.
Often well understood and repeatable projects (like building a house) follow a waterfall.
Agile suits more 'non-repeatable' projects, such as building a software product, where each product will have its own challenges, risks and 'hidden dangers' while being driven by changes in the 'business' environment. Therefore it is very important, when doing Agile for new and innovative products, to:
give clear guidance about what is working and what is not working back into the process
keep overall focus on the problem that the product is attempting to solve (i.e. always keep in sight that gold paved road to sales)
If Agile allows software to ‘flow’ freely, then it needs a proper pipe for it to reach its destination (i.e. the hands of the customer). If the pipe shape keeps changing, or has leaks in it there is no way the software will reach the right destination.
One thing that is very easy to do (especially in a startup environment) is to get too excited about the problem domain without staking out which specific parts of that domain the product addresses. What is even worse is not sticking to them once identified! This is because Agile methods can be abused to hide (but not ultimately solve) problems with changing requirements and scope creep, resulting in big failures.
This process is made more difficult by the fact that for a new product idea one needs to find a gap in the market. The irony is: the bigger the gap you find (Total Addressable Market – TAM), the more funding you are able to attract on the basis of future demand for your product, the bigger the promises made, and the higher the chance of getting lost in the multiple interesting aspects of the 'gap', especially if there are conflicting views in management or if the gap area is not well understood.
Here multiple sources of tension exist: what constitutes the business's view of the minimum viable product (MVP), and how is it different from the potential customer's view (ideally both should be the same)?
The answer to the question 'which part of the MVP should we do first?' is the launching point for the Agile process. Ideally, a set of features is decided and then iterative and incremental development starts. As long as there is tough resistance to the business asking for massive changes to the path, and clear feedback into the development from the 'customer', the end result should be aligned with the initial expectations.
I believe that for big gaps where both investors and company owners see big $$$ signs, the so-called 'disruptive innovation', it may be difficult to start off with Agile and maintain the discipline of clear feedback and clear definitions of done in terms of the MVP. In such a case it may be good to start off with a waterfall model, with a low expectation of success, and then do Agile. Hopefully, after one attempt at waterfall, one will end up with a product that can be put up against the MVP concept, deltas calculated and then fed into an Agile process to be filled in incrementally.
Why start with waterfall? Because waterfall imposes a strict condition of no iteration, so it is more difficult to abuse. It forces you to commit to requirements in order to do some design, to commit to a design in order to write some code, and so on. And, as I said, in the end it gives you a good target to destroy when you start the Agile method. It can also give you strength to push back on requirement changes and scope creep later on.
One may say that it is a waste to do waterfall. But one must remember that in an innovation/new-product environment the target is usually not to 'boil the ocean', so it may be possible to quickly attempt waterfall to get a starting point for Agile. Also, in most innovation environments the initial team size and skill distribution do not allow for a proper Agile implementation in any case. For example, it is not typical to find abundant testing or quality-assurance resources.