TicTacToe: Playing Against the Machine

TicTacToe is a simple game, but it does have some tactical depth. With a limited number of moves and turns, it is easy to build exhaustive-search solutions to implement the logic for a Player vs Computer game.

It is also the perfect game for anyone who wants to start with AI programming (e.g. Reinforcement Learning). Keeping this in mind I created a test-bed with two ‘brains’ – one that uses Induction (taking advantage of the finite state-space) and the other that uses Pattern Matching (taking advantage of the fixed grid nature of the game).

The code, written in Python, can be found on GitHub:

https://github.com/amachwe/tictactoe

Here is the readme that explains how to use it:

https://github.com/amachwe/tictactoe/blob/main/README.md

Currently, neither the Induction brain nor the Pattern Matching brain really learns anything. This is because, given the relatively small size of the state space, we can actively search the full state space to plan each move.
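To make this concrete, here is a minimal sketch of such an exhaustive (minimax-style) search. The repo's implementation is in Python; this Java sketch only illustrates the idea, and the board encoding and all names are mine, not the repo's.

public class Minimax {

    // board[i] is 'X', 'O' or ' ' (empty); 9 cells in row-major order.
    static int bestMove(char[] board, char player) {
        int best = -1, bestScore = Integer.MIN_VALUE;
        for (int i = 0; i < 9; i++) {
            if (board[i] != ' ') continue;
            board[i] = player;                       // try the move
            int score = -negamax(board, opponent(player));
            board[i] = ' ';                          // undo it
            if (score > bestScore) { bestScore = score; best = i; }
        }
        return best; // index of the strongest move, or -1 if the board is full
    }

    // Exhaustive search: the value of the position for the side to move
    // (+1 = win, 0 = draw, -1 = loss), found by trying every continuation.
    static int negamax(char[] board, char player) {
        char w = winner(board);
        if (w != ' ') return w == player ? 1 : -1;
        boolean moved = false;
        int best = Integer.MIN_VALUE;
        for (int i = 0; i < 9; i++) {
            if (board[i] != ' ') continue;
            moved = true;
            board[i] = player;
            best = Math.max(best, -negamax(board, opponent(player)));
            board[i] = ' ';
        }
        return moved ? best : 0; // no moves left and no winner: a draw
    }

    static char opponent(char p) { return p == 'X' ? 'O' : 'X'; }

    static char winner(char[] b) {
        int[][] lines = {{0,1,2},{3,4,5},{6,7,8},{0,3,6},{1,4,7},{2,5,8},{0,4,8},{2,4,6}};
        for (int[] l : lines)
            if (b[l[0]] != ' ' && b[l[0]] == b[l[1]] && b[l[1]] == b[l[2]]) return b[l[0]];
        return ' ';
    }

    public static void main(String[] args) {
        char[] board = "X O  O  X".toCharArray(); // an arbitrary mid-game position
        System.out.println("Best move for X: cell " + bestMove(board, 'X'));
    }
}

Running main on the sample position prints the cell index of X’s strongest reply.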

Patterns in the Pattern Matching brain are provided as ‘knowledge’ but these can just as well be learnt using Reinforcement Learning methods.

Have fun playing the game or using it as a test-bed for your own AI Tic Tac Toe playing brain!

Feel free to comment to ask questions/provide suggestions!

Almost a Customer!

Letting people know that you have something interesting to ‘sell’ is just the first step. Once they learn about your product there is still the small matter of completing the sale and delivering the service/item purchased to the customer.

I wanted to share a recent experience I had as a potential customer. I learnt about a product through an ad on a social media platform. They were also advertising an offer on the product for £7.50. This was a small vendor selling a speciality product.

When I clicked through to the checkout I got a rude shock. The total price was now showing as £11.50: they had added £4 for delivery. I could get free delivery if my order was more than £20, but I didn’t want to order that many items! The value I was getting did not match the total cost I would have to pay (a ‘net value’ of –£4 to me), so I decided to abandon my ‘full’ basket.

This is called the ‘Abandoned Basket’ problem – and it is seen in bricks-and-mortar stores as well, where people simply leave a shopping basket full of items behind and walk out of the store.

One might think that is the end of the story. But no! Things have become a lot more ‘technical’.

A few hours later I got an email. Before showing me the real price or anything that might scare me away they had taken my contact details! That means they could try and change my mind at a later date. Unlike in a bricks-and-mortar store an e-retailer can chase after prospective customers (GDPR notwithstanding).

The email was not the usual ‘you have items in your basket – click here to complete your purchase’. No way. They were a lot cleverer than that. They had done their research. The email identified high delivery costs as a common reason why people don’t complete their purchase. It also attempted to justify the £4 shipping cost, even though the item was coming from within the UK (I still don’t know why it costs that much).

But I was still not convinced and I ignored the email. Then a few hours later I received another email. This one was offering me £3 off if I spent at least £10. This meant my net value went from –£4 to –£1, and I did not need to spend much more than I had planned.

In the end they successfully converted an abandoned basket into a sale and I received the items on time and in good condition!

We can see three main elements in this ‘success’ story:

  1. Getting a foot in the door by capturing the customer’s details before they can ‘run away’ – this gives the seller a second chance at converting the customer
  2. Understanding what made the customer run away in the first place and attempting to arrive at an acceptable ‘middle-point’
  3. Ensuring that the product/service delivery is pain free to encourage the customer to order again

The Brave New World of Debt-to-GDP Ratio!

What is the Debt-to-GDP Ratio?

Simply put, it is a country’s total government debt divided by its GDP, a measure of its productive capacity. This is similar to the debt-to-earnings ratio used to evaluate the financial health of companies as well as individuals.
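Written out as a formula (the figures in the example below are purely illustrative, not actual data for any country):

Debt-to-GDP ratio = (gross government debt ÷ nominal GDP) × 100%

e.g. a government debt of £2.2tn against a GDP of £2.0tn gives (2.2 ÷ 2.0) × 100% = 110%.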

Typically if an individual or a company has a bad debt-to-earnings ratio they will find it tough to get a loan or attract investment. But debt-to-GDP doesn’t work the same way because not all countries are the same!

How does the Debt-to-GDP Ratio work?

Debt by itself is not bad. Similarly, a rising debt-to-GDP ratio may not be a bad thing. Why? Because borrowing is not bad if it leads to a growth in productivity. Productivity here is linked with one or more objective measures like income growth; we are not talking about subjective measures like ‘personal’ growth.

For example, if you as an individual borrow money to buy a car that you will drive as a taxi on weekends as a second job, then while your debt has increased, so has your income (assuming everything goes well). As long as your income (which can be used as a measure of productivity) increases faster than your debt, things will be fine. Obviously, individuals are limited in how much they can increase their income within the time frame of the borrowing. But when it comes to a country, the limits are a lot more relaxed. A country can always find productive uses for the money it borrows. Some examples include: strengthening infrastructure, improving education and improving connectivity (both national and international).

If productivity of an individual or a country increases faster than debt then they become an attractive target for future loans.

The flip side is more interesting! If a person spends the borrowed money in meeting day-to-day expenses then it is unlikely their income will rise faster than the debt, if it rises at all. Such an individual will find themselves in trouble very quickly with their creditors. When it comes to a country this logic starts to fail. Some countries end up attracting money even if things are bad all over. In fact, they keep attracting money even if they are not doing so well and are at the heart of a global financial crisis!

We can see this clearly in Figure 1, where the USA and GBR (UK) have been borrowing heavily. Their debt-to-GDP ratios show a ‘step up’ right after both countries started borrowing to spend their way out of the 2008 Financial Crisis. The interesting thing is that this data mostly runs to 2018; we should expect a similar (perhaps larger) ‘step up’ due to Covid-19 relief spending when we review the data for 2020!

Figure 1: Debt-to-GDP ratio for several countries and the Average debt-to-GDP ratio of major developed countries + India, China and South Africa

A Question of Trust

‘In God we Trust’ is the official motto of the USA. For the financial world it is ‘In the Dollar we Trust’. Other countries have faced massive backlash for high debt-to-GDP ratios: loss of access to cheap borrowing, rating downgrades and currency devaluations. Yet time after time, during a crisis, we see funds from all over the world flow into the US financial system, allowing the US government to borrow cheaply! This is similar to how, when facing a storm, all fishing boats rush back into the harbour. It is one reason why it was relatively easy for the US to propose borrowing massive amounts of money (some $3 trillion) to support its economy through the Covid-related lockdown and beyond.

There is a similar narrative of stability and productivity around the UK, always seen as a strong player in the world of financial services and second only to the USA in the financial sector. The UK has similarly been borrowing a lot more without the corresponding growth in GDP. The first Conservative Government of David Cameron (2010 onwards) sought to stem the tide of borrowing by introducing ‘austerity’ and ending the massive spending spree of the previous government, which had been dealing with the 2008 financial crisis. There were all kinds of positive signs that, despite the impact of Brexit on growth, the growth in debt was coming under control and ‘austerity’ would end for good. All this was before the Covid-related lockdown.

Only the data from 2020 will tell the scale of ‘step-up’ in the debt-to-GDP graph.

The Future

If you look at Figure 1, the debt-to-GDP ratio of every country presented is heading in only one direction: ‘up’! It is either the gentle slope of a hill or the steep step of a plateau. India (orange dots) may seem like the odd one out, but it is not: in recent budgets the Govt. has been forced to let the deficit widen (the data only runs to 2018), and there are doubts as to the true figure of the Govt. debt.

The big, tip-of-the-iceberg question is: ‘what happens next?’ If the US/UK are the safe harbours, what happens when they become less and less safe, especially after Covid? Would that reduce their appeal? What ‘safe harbour’ will all that money seek? When does it become unsustainable? Who are the debt-holders who will take the decision to declare the situation unsustainable? Does a smaller population help in faster recovery?

To give an example, cash-rich economies like China, which have a massive surplus, behave like a fast-food chain. They want you to keep eating more of their food but also not fall ill. Their food is money and the delivery mechanism is the world of finance. It is in their interest that their target markets stay healthy so that they can continue buying from China. Where China cannot find a big market, it plants the seeds of one by financing infrastructure projects that improve its access to trade routes. So it will be interesting to see how the net-exporting countries behave over the next year. This also makes the current UK 5G ban on Huawei equipment very interesting.

Blast from the Past

As a final remark I need to mention what one of my favourite economists, John Maynard Keynes, said about this topic. Politicians remember the first half of his advice: it is fine to run a deficit (i.e. spend more than you earn) in times of great need (e.g. the Great Depression). But they forget the rest of his advice: the Govt. must balance the budget during times of plenty.

This is common sense. When you have good income levels it is logical to use them to reduce your debts, so that in times of scarcity you have a lower debt burden and more money left for your personal needs.

But this is also political suicide – no elected Govt. would survive if it told people that it was going into austerity mode when things were going well [1]. This is one of the big reasons we see a constant increase in debt-to-GDP across the world: shocks and crises are never in short supply, and it is unpopular to claw back when things are going well. The thinking here is that if you grow your productivity (e.g. measured by income) fast enough, you can always keep getting a bigger loan and stay one step ahead of the debt-collector.

Or if you are ‘big enough and transparent enough’ as a country, people will always be willing to lend to you (what else will they do with their money?).

Sources of Data:

https://data.oecd.org/gga/general-government-debt.htm, https://tradingeconomics.com/india/government-debt-to-gdp, https://tradingeconomics.com/china/government-debt-to-gdp, https://tradingeconomics.com/south-africa/government-debt-to-gdp

Notes:

[1] Two examples where this did not happen, from the world’s largest democracy, India: demonetisation, and maintaining a high domestic fuel price when international crude oil prices had fallen. In both cases strong steps were taken by the elected Govt. and it still came back to power with a larger majority. Unfortunately, in both cases the Govt. managed to lose the advantage gained from these tough steps due to mismanagement.

Let’s Not Waste a Crisis!

The ongoing COVID-19-related suppression of economic activity will impact incomes across the board. Irrespective of how the income is generated (e.g. business, employment, self-employment), the impact can be positive, negative or uncertain.

  • Positive for those whose incomes are not disrupted or are increased due to demand (e.g. PPE manufacturers, health-care staff, delivery drivers).
  • Negative for those whose incomes have been disrupted without any relief in sight (e.g. restaurants, people who have been laid off with bad prospects for getting another job).
  • Uncertain for those who have been furloughed or laid off but with good prospects for getting a job.

With a contraction of anything between 6-11% predicted, the majority of cases should fall in the ‘Uncertain’ category (I predict 4-7%), and these will move to either the Positive or the Negative category over the next year or so.

Why do I say that?

I say it because there will be different responses to the challenge: restructuring, process improvements, failing fast, and even retraining/reskilling (both at the individual and the organisational level). Depending on how effective a business is at transforming itself to survive, a lot of the people in the ‘Uncertain’ category could quickly transition to the ‘Negative’ category.

One of the main transformation patterns is to carry out process improvements/restructuring with increased automation, so that costs decrease and production/service elasticity increases as incomes first fall and then recover over the medium and long term.

This group of people who jump from Uncertain to Negative is the BIG problem, as it can trigger a long-term contraction in consumption. How can we help these people reskill and retrain so that they can re-enter the job market? What can we do to support people as business incomes contract and the pressure to automate increases?

Universal Basic Income

One possible answer to many of these questions is Universal Basic Income. If we provide people guaranteed support with the basics (e.g. food, rent) then we are not only cutting them some slack but also decoupling ‘survival’ from ‘growth’.

Universal Basic Income (UBI) is a simple concept to understand: all citizens get a basic income every month irrespective of how much they earn. This is guaranteed from the day they turn 18 till the day they die. They may also get a smaller percentage from the day they are born to help their parents with their upkeep.

See this TED Talk by Rutger Bregman for more on this: https://www.youtube.com/watch?v=aIL_Y9g7Tg0

With UBI a recession will not impact the basics of any household. It will provide a safety net for families and individuals. It will also allow people to develop their skills and innovate.

There are a few wrinkles in this. Firstly, how do we prevent inflation as ‘free money’ is handed out to people? One proposed mechanism is to use a different class of money from the currency of the country. This UBI money cannot be used as a store of value (i.e. it can’t be lent for interest), only for limited exchange (e.g. food, rent). This is similar to the US Supplemental Nutrition Assistance Program (SNAP) – also known as ‘food stamps’ (https://en.wikipedia.org/wiki/Supplemental_Nutrition_Assistance_Program) – which can be exchanged for certain types of food. Several countries (such as Finland, the USA and Canada) have run experiments along these lines. This form of money should also ‘expire’ periodically so that people don’t start using it in a ‘money-like’ way.

Another challenge is how to convert ‘temporary’ UBI money into ‘permanent’ currency. This is required for businesses accepting UBI money to be able to pass it down the supply chain (both locally and internationally). For example, if you buy all your groceries with UBI money and it is not convertible to currency, then how will the grocery shop pay its staff and suppliers? What if the suppliers were importing groceries from other countries – how would they convert UBI money to any international currency? In SNAP, the stamps are equivalent to money. It does not have the same impact as UBI because its cost is a small fraction of total US GDP (about 0.5%).

Still, one should never let a good crisis go to waste! Time to think differently.

What Happens Next?

In this post let us think about what happens next as we start to come out of the Covid-19 related lockdown.

No country can claim to be immune from the economic effects of the Covid-related lockdown. However, as countries start to emerge from the lockdown some will rebound faster than others.

What is happening now?

Let us next look at where we are today. Today, a large number of people and businesses have seen the flow of money reduce to zero. The expectation of a return on investment is low for a large section of the economy. That said, certain sectors are doing quite well, or at least as normal (e.g. groceries, online retail), as they pick up overflow business.

In this situation, with little or no money going to people and businesses, someone has to step in, be the ‘credible borrower’ and borrow on behalf of those who are struggling. This is the Government, which as the ‘credible borrower’ passes the borrowed money on to its citizens – in a low-waste manner, one hopes. One point here: it is easier for a Government to print money than to borrow it, but that can lead to inflation without any real growth.

We can treat the current situation as an artificial suppression of demand and supply (as people lose incomes and stores are forced to close or reduce visitor numbers).

This can also be understood as a scenario where the blood supply to an organ in the body has been blocked. The body reacts in the very short term by reducing the function of that organ and rushing out chemical support to suppress pain, but in the long term the body is severely impacted unless the block can be removed and/or another path can be found to deliver the required quantity of blood.

What happens Next and How to Deal with it?

It all comes down to effective planning and effective use of people, processes and tools.

Businesses that have or are able to quickly get the required plans in place for short and long term changes to how they work will benefit from overflow business.

People who are able to re-skill or move from impacted areas to areas of new opportunity will be able to benefit from continued employment during the rebuilding period.

Both of the above should allow some blood to flow to the organ, but they neither restore normal supply nor fix the original damage that caused the block.

Repairing the Damage

The repair will start once the lockdown ends. Those countries that release the lockdown earliest (and are able to ride out the second wave of infections) will have the ‘first mover’ advantage towards normalisation. This should also promote local businesses that step in to fill the gap left by imports where possible.

The key point to keep in mind here is that we will not go back to the status quo, just as scar tissue is never as smooth as the torn skin it replaces. We will lose some businesses. Some people will fall into debt and be unable to recover without help.

Due to loss of incomes, social distancing and widespread work-from-home we will find demand continues to be suppressed for some time to come. This will be especially true for ‘non-essential’ goods. This means the suppressed demand must be unlocked using some of the options we will discuss below.

Who sinks and who swims comes down to how they prepared during the crisis for the post-crisis period (i.e. those who did not look to change business-as-usual and let a good crisis go to waste will sink) and how effectively they can implement those strategic plans in the coming months. This is a good example of Darwinian survival of the fittest.

Who will survive:

  1. Those who are quick to plan and implement new processes that allow them to generate revenue.
  2. Those who have deep pockets to fall back on, for the next 12 months (at least)
  3. Those who are able to focus on their strengths and optimise resources – when we look at (2) we must remember that ‘markets can remain irrational for longer than you can remain solvent’ (a saying commonly attributed to John Maynard Keynes)
  4. Those who are directly benefiting from the crisis (short term survival)
  5. Those who enjoy a good name in the market or are ‘expected’ by the market to bounce back quickly

But what is the Recipe for Success? What should we do more of as a business?

  1. Advertise: Replace front-office with a slick website, smartphone app and/or virtual agent (even a chat-bot helps handle the first level of queries)
  2. Process transformation: Reduce the need for manual processes in business operations – this is not something only multi-million pound businesses need to do! In fact this is something everyone needs to do!
  3. Digitise and Automate as much as possible – from fundamental building-block apps (e.g. billing) to more advanced planning, optimisation and prediction apps (here is a golden chance for AI at a lower price-point, or even for local AI consultancies)
  4. Concentrate on strengths and focus your resources on the service/product that provides the greatest rewards – enable home delivery where possible – smart phones + hybrid/electric vehicles should reduce cost of operations and bring home delivery to the same price point as in-store
  5. Don’t stop innovating – innovation is the hidden strength of any business (large or small!)

As an individual facing an uncertain future in terms of employment, a lot of the above points are just as relevant (once the context is changed):

  1. Advertise your existing skills and experience (make a website, LinkedIn profile), talk about your interests and hobbies! Blog!
  2. Look inward: Look at all the good stuff you have done, all the mistakes you have made and the lessons you have learnt. Try changing something small about yourself that you feel will improve the way you feel about yourself. For me this was making sure I get a lot of outdoor play time with my kids!
  3. Prepare your tools: make a CV, take stock of where you have been and where you want to get to! You won’t get another chance like this to plan your career!
  4. Concentrate on your strengths: reduce expenditure, improve efficiency by doing the important things and ignoring things that lead to waste of time, money or both. One personal example: we started cooking more at home which resulted in not only money saving but also us discovering new things that we could make at home!
  5. Don’t stop learning! Now is the time to take a risk. Make sure you use all the tools available to engage with people who are leaders in your field of learning as well as fellow students – this can be anything – from cooking to a language
  6. Don’t stop thinking and creating. Write a short story, create a new dish, draw a picture, change the layout of your living room! These act as massive confidence boosters

Additional Thoughts: Automation

Automation was on the rise before Covid. The bigger players have already moved online and use automation-enabled IT, and therefore continue to sell effectively (albeit within constraints). But the contact-less nature of the solutions to this problem will push app/online interaction even more. As this happens, it becomes easier to automate the interaction. Two small examples:

  1. Pizza shops now only support cashless delivery, no collection. Therefore, all my interaction with the pizza shop is through their website or an app (e.g. JustEat). The pizza is placed on my doorstep and I hardly even see the delivery person, as they back away more than 2 metres and leave as soon as they see me pick up the pizza.
  2. Food stalls in various food markets have started home deliveries (again cashless and contact-less). Earlier they would hire staff to manage long queues; today they operate behind a slick website (that you can throw up in a few hours), a scheduling tool, and WhatsApp messaging to personalise the interaction.

This effect, when combined with the long-term trend of more people working from home (which is bound to accelerate now), is an opportunity for small businesses to deliver local services through different app-based platforms involving lots of automation (to make it cheaper). The smaller players have to start using the same force-multiplier tools, platforms and channels as the bigger players right now! The most basic one is the ability to accept online orders and payments.

Now that people don’t travel for work, they no longer form a captive market for food vendors, coffee shops and bars. But these things can come to their doorstep! With automation-enabled IT the cost of home delivery can be managed, especially with the added benefits of scale.

Finally: I am still waiting for the day I can order Starbucks coffee to my home for the same price (if not cheaper) as what I get in the stores. Starbucks could open coffee-making kitchens in different areas and serve each area from there. Automation would help by providing seamless links between the different stages, plus AI-based planning and prediction of demand.

The Advantage of Covid-19

Covid-19 has been wreaking havoc across the globe. But this was also expected given the fact that we have not been the best of tenants for Mother Earth.

All the doom and gloom aside, Covid-19 and the mass lockdowns are teaching us a very important lesson about the future of automation and technology.

In a single line:

A secure future requires smart people working on smart devices using smart infrastructure!

Figure 1: Relation between Smart People, Things and Infrastructure.

Figure 1 shows the interactions between Smart People, Things and Infrastructure.

The Covid-19 crisis, which has brought life to a standstill, has exposed the weakness of our automation maturity. Services from haircutting to garbage collection have been trimmed back, mostly as a proactive step. Whatever automation we do have has helped tremendously (e.g. online grocery shopping), even as people’s behaviour changed overnight when panic set in.

So what is the panic about? What are the basics that we need? The panic is about running out of resources like food, due to a collapse of supply chains that have been optimised to reduce warehousing costs.

Supply chains (logistics) are heavily dependent on people: from farmers growing crops and workers building products, to drivers transporting the product to the shops (or directly to your home).

This is not the only critical system to break down if a large number of people fall ill at the same time.

Healthcare is another area that has been impacted by the lockdown. Care has to be taken to protect vulnerable people, which means minimising contact. This in turn increases their vulnerability due to isolation.

Education has also been impacted, with schools closed and exams postponed or cancelled. This might not seem like a big problem, but consider the impact on future results.

Another area of concern is the utility networks. Can we truly survive disruptions to our electricity or water networks?

If automation is improved in the above areas then we will become more resilient (but not immune) to such events in the future – which is as difficult to achieve as it sounds!

Bottom-up Automation

Before a drone can be piloted remotely for hundreds of miles or a truck driven under human supervision from a port to a local warehouse we need robust telecom infrastructure to provide reliable, medium-high bandwidth, low-latency, temporary data connections.

This magic network has three basic ingredients:

  1. Programmable network – devices that can be treated like ‘software’ and provide the same agility > significant progress has already been made in this area.
  2. Network slicing – to efficiently provide the right resource to the requesting service > a lot of work is ongoing in the context of 5G networks
  3. Closed-loop, light-touch orchestration – to help people look after a complex network and make changes quickly and safely when required (e.g. providing a reliable mobile data link to a drone carrying a shipment of food from a wholesaler to a shop, for the remote-piloting use-case) > significant progress has been made, with a lot of work ongoing

Using such a network we can build other parts of the puzzle such as smart roads, smart rails and then smart cities. All of these help improve automation and support increasingly light touch automation use-cases.

Smart Things

Once we have the Smart Infrastructure we need Smart Things to use them.

For Logistics and maintaining a robust supply chain during a pandemic we need a fleet of autonomous/remotely supervised/remotely piloted vehicles such as heavy-lift drones and self-driving trains/cars/ships/trucks. We also need similar assistance inside warehouses and factories, with robots carrying out the operations under human supervision (the so-called Industry 4.0 / lights-out factory use-case).

Healthcare requires logistics as well as the development of autonomous personal health-monitoring kits that augment the doctor by allowing them to examine a patient virtually. These kits need to become as common as a thermometer and should fulfil multiple functions.

For scenarios related to caring for vulnerable people, semi-autonomous robots are required that can do a lot of the work (e.g. serve dinner).

In case of a lockdown, a teacher should be able to create virtual classrooms with similar level of interactivity (e.g. via AR/VR) as in a real classroom.

To maintain water, electricity and other utilities we need sensors that provide a snapshot of the network as well as actuators, remote inspection and repair platforms etc.

For all of this to be done remotely (e.g. in a lockdown scenario) we need a robust telecoms network. Clearly, without a data connection people would no longer be able to deal with the economic, mental, physical and emotional shock caused by a lockdown.

Smart People

So who will be these people who can pilot/supervise a drone carrying a crate of toilet rolls from a warehouse in Bristol to a shop in Bath, from a remote location? Well-trained people of course!

This requires two important things:

  1. Second Job: Everyone should be encouraged to take up a second discipline (of their interest) in a semi-professional capacity. This helps increase redundancy in the system. For example, if you are a taxi driver and have an interest in radio, maybe your second job can be as a radio maintenance technician.
  2. Thinking beyond data-science and AI: Tech is everywhere, and AI is not the final word in hi-tech. People should receive everyday technology training and, if possible, advanced training in at least one topic. For example, everyone should be taught how to operate a computer, but they should also be able to choose a topic for deeper study, like security, software development or IT administration.

Augmentation technologies should be made more accessible, including providing basic training in Augmented and Virtual Reality systems, so that in case of a lockdown human presence can be projected via a mobile platform such as a drone, or an integrated platform within, say, a forklift or a truck.

Adaptation: This is perhaps the most important. It means not leaving anyone behind in the tech race and ensuring all technologies allow broad access. This will ensure that in times of trouble technology can be accessed not only by those who are most able to deal with the issues but also by those who are the most vulnerable.

All of the above require the presence of smart things!

Conclusion

Thus we have four themes – Logistics, Healthcare, Education and Utilities – running across three layers: Smart People -> Smart Things -> Smart Infrastructure. That is what Covid-19 has taught us. A very important lesson indeed, so that the next time around (and there WILL be a next time), we are better prepared!

Digitisation of Services and Automation of Labour

Digitisation of services is all around us. Where we used to phone to order food or book a taxi, hotel or flight, we now have apps. This ‘app’-based economy has created a large number of highly specialised jobs (e.g. app developers, web designers). It also impacts unskilled or lower-skilled jobs, as gaps in the digitisation are filled in with human labour (e.g. physical delivery of food, someone to drive the taxi).

The other side of digitisation is automation. Where manual steps are digitised, the data-processing steps can still involve human labour (e.g. you fill in a form online, a human processes it, a response letter is generated and a human puts it in an envelope for posting).

In the case of a fully automated and digitised service, processing your data involves ‘machine labour’ (with different levels of automation [see http://fisheyefocus.com/fisheyeview/?p=863]) and any communication is also electronic (e.g. email, SMS). One very good example of this is motor insurance, where you enter your details via a website or app, risk models calculate the premium on the fly, and once payment is made all insurance documents are emailed to you. The only involvement of human labour is in the processing of claims and the physical validation of documents. This is called an ‘e-insurer’.

Machine Labour

Automation involves replacing or augmenting human labour with machine labour. Machines can work 24×7 and are not paid salaries – thus the cost savings. However, machines need electricity and infrastructure to work, and they cannot self-assemble, self-program or self-maintain (the so-called Judgement Day scenario from the Terminator series). Human labour is still required to develop and maintain an increasingly large number of (complex) automated systems. Human labour is also required to develop and maintain the infrastructure (e.g. power grids, telecom networks, logistics supply chains) that works alongside the automated systems.

So humans earn indirectly from machine labour, but in the end automation and digitisation help companies save large amounts of money by reducing operational costs (salaries, office space rentals etc.). Another side-effect is that certain types of jobs are no longer required as automation and digitisation pick up pace.

Impact on Consumption

Now we know from basic economics that all consumption results in someone earning an income. 

For a company, the income is the difference between the value of what they sell and their total costs (fixed + variable) in making and selling it.

A company will increase digitisation and automation with a view to increasing its total income. This can happen by automating processes that increase sales or decrease costs. A company will also automate to maintain levels of service so as not to lose customers to the competition, but there will always be some element of income increase involved here as well.

If costs are reduced by digitisation (e.g. less requirement for a physical ‘front office’) and/or automation (e.g. fewer people for the same level of service), it can lead to loss or reduction of income as people are made redundant or move to suboptimal roles (e.g. a bank teller working in a supermarket). This also contributes to the ‘gig’ economy, where apps provide more ‘on-demand’ access to labour (e.g. Uber).

People consume either from what they earn (income) or from borrowing (e.g. credit cards and loans). If incomes go down then it can either impact consumption or, in the short term, lead to increased borrowing. This decrease in consumption can impact the same companies that sought an increase in income through automation and digitisation.

To Summarise:

  1. Automation and Digitisation lead to cost savings by introducing electronic systems in place of manual processes.
  2. If fewer people are required to do the same job/maintain a given level of output, then employers are likely to hire fewer new workers and/or reduce the size of the workforce over time.
  3. This will reduce the income of people who are impacted by redundancies and changes of job role.
  4. This in turn will reduce the consumption of those people, which may hit the very same companies that are introducing automation and digitisation.
  5. This in turn will squeeze margins further, forcing further reductions in costs or a search for increased consumption from some quarter….
  6. And we seem to be trapped in a vicious circle!

This Sounds Like Bad News!

So, looking at the circular nature of flows in an economy, as described in the previous section, we can predict some sort of impact on consumption when large-scale digitisation and automation take place.

As an aside, this is a major reason why ‘basic income’ or universal income is a very popular topic around the world (read more: https://en.wikipedia.org/wiki/Basic_income). With basic income we can guarantee everyone a minimum lifestyle and thereby promise a minimum level of consumption.

The actual manifestation of this issue is not as straightforward as our circular reasoning from the previous section would indicate. This is because the income of a company depends upon several factors:

  1. External Consumption (exports)
  2. Amount consumed by those whose income increases due to automation and digitisation
  3. Amount consumed by those whose income decreases due to automation and digitisation
  4. Labour costs attributed to those who implement and support automation and digitisation
  5. Labour costs attributed to those who are at risk of being made redundant due to automation and digitisation (a reducing value)
  6. Variable costs (e.g. resource costs)
  7. Fixed costs

Exports can help provide a net boost to income – this external consumption may not be directly impacted by automation and digitisation (A&D). It may even be indirectly boosted if the A&D activities lead to imports from the same countries.

The two critical factors are (2) and (3): namely how much of the output (or service) is sold to people who benefit from A&D and how much is sold to those who do not benefit from A&D. 

If a company employs a large number of people who can be made redundant via A&D activities, and a large portion of its consumers are people whose incomes will be impacted by A&D, then we have a very tight feedback loop – which can lead to a serious loss of income for the employer, especially if it coincides with an external shock (e.g. an increase in a variable cost like petroleum).

On the other hand if a company caters to people whose incomes increase with A&D (e.g. software developers) then the impact to its income will be a lot less pronounced and it may even increase significantly.

What works best is when a company can sell to both, and has enough space for both A&D activities and manual labour. This means they can make money from both sides of the market. Good examples are companies like Amazon, McDonald’s and Uber, which have human components integrated with A&D, which then acts as a force multiplier.

Using this framework we can analyse any given company and figure out how automation will impact it. We can also see that in the short term A&D can have a positive effect, as it acts as a force multiplier, opening new avenues of work and creating demand for different skills.

Breaking Point

Real issues can arise if automation is stretched further, to complex tasks such as driving, parcel delivery and cooking food, or if digitisation is taken to an extreme (e.g. e-banks with no physical branches). This will have a large-scale impact on incomes, leading to a direct reduction in demand.

One way to force a minimum level of consumption is for the government to levy special taxes and transfer that income directly to those who need it. This will make sure those who are unskilled or have only basic skills are not left behind. This is a ‘means-tested’ version of basic income, similar to a benefits system.

The next step will be to re-skill people to allow them to re-enter the job market or start their own business.

Exploring Stream Processing in Java

Java was a relatively late entrant into the functional programming game. Streams, lambdas and other functional programming constructs were introduced in Java 8. Scala and Clojure had already popularised functional programming while Java was stuck in the Enterprise App space.

In this post we take a gentle drive through the world of Streams. This will make it easy for you to do quite a few interesting things with them!

Streams are a simple concept to understand. At their most basic they are a different way of processing a list structure, with a few restrictions on what we can do.

The list is treated as a stream/sequence of items, rather than an aggregate collection, while we:

  1. Hold no state in the pipeline except the current item
  2. Hold no external state (like counters)
  3. Don’t depend on the size of the stream – just know whether you are dealing with a finite or an infinite stream
  4. Restrict ourselves to a limited set of operators

To do this we do need to think that extra bit harder to recast our good old for-loop into a pipeline of stream operations.

What benefits do we get if we do this? The main benefit: if we hold no state (internal or external) then we can seamlessly parallelise the stream processing. Really – a well-written stream processing pipeline will run in parallel mode seamlessly. If it doesn’t, then you have either made a mistake in the pipeline design or have a use-case that is not suitable for streams.

One consequence of parallel stream processing is that we may need to reduce the results from the different pipelines at the end to return a single result. Alternatively, we can process the resulting items without returning a single result.
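To illustrate why external state breaks parallel streams, here is a small sketch (the racy counter is deliberately wrong):

import java.util.stream.Stream;

public class StatelessVsStateful {
    public static void main(String[] args) {
        // BAD: counter[0]++ is not atomic, so under .parallel() threads can
        // lose updates and the printed count may be less than 50 on some runs.
        int[] counter = {0};
        Stream.iterate(1, n -> n + 1).limit(100).parallel()
              .forEach(n -> { if (n % 2 == 0) counter[0]++; });

        // GOOD: all state lives inside the pipeline; the terminal count()
        // performs the reduction and always returns 50.
        long evens = Stream.iterate(1, n -> n + 1).limit(100).parallel()
                           .filter(n -> n % 2 == 0)
                           .count();

        System.out.println("racy counter: " + counter[0] + ", stream count: " + evens);
    }
}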

Java Streams have what are called ‘terminal’ methods that do the reduction for us. These methods behave differently when the stream is infinite, which is why we have point (3) above. The methods are:

  • count(): used to count the number of elements in the stream; will never terminate for an infinite stream as you cannot ever finish counting to infinity
  • min()/max(): used to find the smallest or largest value using a comparator; will never terminate for an infinite stream
  • collect(): used to collect the stream items into a single collection (e.g. a list); will never terminate for an infinite stream
  • reduce(): used to combine the stream items into a single object (e.g. to sum a stream of integers); will never terminate for an infinite stream

There are other terminal methods that do not generate a result or are not guaranteed to return one:

  • forEach(): used to process all the stream items at the end of the pipeline (e.g. to write processed items into a file); not a reduction because no result is returned; will never terminate for an infinite stream
  • findAny()/findFirst(): used to return the first item (in encounter order) or any item (the first available from any of the parallel streams); not a reduction because only a small set of items from the stream is processed; these are short-circuiting operations, so they terminate even on an infinite stream, provided some item makes it through the pipeline
  • allMatch()/anyMatch()/noneMatch(): used to check whether all, any or none of the items match a predicate; not a reduction because no combined result is built from the items; these short-circuit as soon as the answer is known (e.g. anyMatch() stops at the first match), but on an infinite stream they may never terminate if the deciding item never appears (see the sketch after this list)
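Here is a small sketch of this short-circuiting behaviour on an infinite stream (the numbers are chosen arbitrarily):

import java.util.stream.Stream;

public class InfiniteStreams {
    public static void main(String[] args) {
        // findFirst() is short-circuiting: it returns as soon as one element
        // makes it through the pipeline, even though the stream is infinite.
        System.out.println(
            Stream.iterate(1, n -> n + 1)      // infinite: 1, 2, 3, ...
                  .filter(n -> n % 1000 == 0)
                  .findFirst());               // prints Optional[1000]

        // anyMatch() also stops at the first match.
        System.out.println(
            Stream.iterate(1, n -> n + 1).anyMatch(n -> n > 1_000_000)); // true

        // By contrast count(), min(), max(), collect() and reduce() would
        // never return here - e.g. Stream.iterate(1, n -> n + 1).count()
        // runs forever, so never call them on an unbounded stream.
    }
}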

The code block at the end of this post has examples of all the different functions described above.

Real World Example

Imagine a marble-making machine. It produces an endless stream of marbles of slightly different sizes and weights. We need to perform certain actions based on the weight and size of each marble (e.g. paint them different colours, remove ones that are not in the correct weight or size range), then pack them based on colour into small boxes, and then again pack those small boxes into bigger boxes (of the same colour) to send to the wholesaler.

A simple linear pipeline looks something like this:

  1. check size and weight, if outside correct range: discard
  2. choose a colour based on weight: apply colour
  3. send them to a different box based on colour

Up to step 3 there is no need to maintain any state, because the size check and the colouring step depend on nothing but the marble being processed at that time. But at the terminal step of the pipeline we do need to maintain some state: which box is associated with which colour, so that each marble is directed to the correct box. Therefore we can carry out steps 1 -> 3 in as many parallel pipelines as we want. This will give us a set of boxes (each representing a single colour) per pipeline. Step 1 is a ‘filter’ operation and step 2 is a ‘map’ operation on the stream.

Since we have made full use of parallel streams to manufacture the marbles and put them into multiple boxes, now is the time to ‘pay the piper’ and produce a single usable result that can be dispatched to the wholesaler. We need to ‘reduce’ the boxes, again holding a little bit of state (the colour of the big box), but this time we are making a box of boxes.
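As a sketch (not production code), the marble pipeline could look like this in Java streams. The Marble record, the Colour enum and the weight/size ranges are all invented for illustration; Collectors.groupingBy plays the role of the colour-to-box state at the terminal step:

import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class MarblePipeline {

    enum Colour { RED, BLUE, GREEN }

    // Hypothetical marble with a weight in grams and a size in mm.
    record Marble(double weight, double size, Colour colour) {}

    public static void main(String[] args) {
        Random rnd = new Random(42);

        // The 'machine': an endless stream of raw, uncoloured marbles.
        Stream<Marble> machine = Stream.generate(
            () -> new Marble(4 + rnd.nextDouble() * 2, 9 + rnd.nextDouble() * 2, null));

        Map<Colour, List<Marble>> boxes = machine
            .limit(10_000)                                        // take a finite batch
            .parallel()
            .filter(m -> m.weight() >= 4.5 && m.weight() <= 5.5   // step 1: discard
                      && m.size()   >= 9.5 && m.size()   <= 10.5) //         out-of-range
            .map(m -> new Marble(m.weight(), m.size(),            // step 2: colour by weight
                      m.weight() < 4.8 ? Colour.RED
                    : m.weight() < 5.2 ? Colour.BLUE : Colour.GREEN))
            .collect(Collectors.groupingBy(Marble::colour));      // step 3: box by colour

        boxes.forEach((c, ms) -> System.out.println(c + ": " + ms.size() + " marbles"));
    }
}

Collectors.groupingBy does the ‘holding of state’ for us at the terminal step; for a heavily parallel pipeline Collectors.groupingByConcurrent is a drop-in alternative.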

Worked Example in Java

Assume we have a finite stream of unknown size (infinite streams require a bit more care) of whole numbers (i.e. positive integers), and we want to perform some processing, like filtering out odd numbers, finding the sum of all even numbers in the stream, and so on.

While this is a toy problem, it does allow us to demonstrate all the different aspects of stream processing, including two of the most common stream operations: filter and map.

We have a data-generator function that takes an upper limit as a parameter and creates a stream of whole numbers (using Stream.iterate) from 1 to the upper limit, in sequence. This is done so that we can easily validate the results. Full source code, including the data-generator function, is provided at the end of the post.

Filter Operation

The filter operation is similar to writing a loop with an if condition inside it, where the logic inside the loop executes only if certain conditions are met.

Java Example:

We want to find the maximum even number in the stream. For this we use the filter method on the stream, passing a lambda that tests whether each value is odd or even; all odd values are dropped. We then call the max terminal function.

System.out.println(generateData(maxItemCount).parallel().filter(x -> x%2 == 0).max(Integer::compare));

Output: We set maxItemCount to 100, so this prints ‘Optional[100]’ (max returns an Optional) – 100 being the largest even number between 1 and 100.

Map Operation

The map operation is used to apply a function to each item in a stream, transforming it. The applied function should not have any side effects (e.g. calling external APIs) and should produce a return value.

Java Example:

Assume we want to process the even numbers that we identified in the previous example. In the example below we use map to transform a stream of even numbers (the output of filter) into a stream of squares of even numbers.

System.out.println(generateData(maxItemCount).parallel().filter(x -> x%2 == 0).map(x -> x * x).collect(Collectors.toList()));

Output: Since we use the collect terminal method at the end with a list collector (Collectors.toList), we get a list of the squares of the even numbers between 1 and the upper limit (in this case 100). Note that filter has already removed the odd numbers, so map can square each value directly.

That’s all for this post! Thank you for reading.

Code for Examples

package test.Stream;

import java.util.stream.Collectors;
import java.util.stream.Stream;
/*
    Test common stream functions.
 */
public class TestStream {

  
    public static void main(String[] args) {

        final int maxItemCount = 100;

        System.out.println(generateData(maxItemCount).parallel().count()); // Result: 100 as we are generating whole numbers from 1 to 100 (inclusive)

        System.out.println(generateData(maxItemCount).parallel().filter(x -> x%2 == 0).min(Integer::compare)); // Result: Optional[2] as 2 is the lowest even number between 1 and 100

        System.out.println(generateData(maxItemCount).parallel().filter(x -> x%2 == 0).max(Integer::compare)); // Result: Optional[100] as 100 is the highest even number between 1 and 100

        System.out.println(generateData(maxItemCount).parallel().filter(x -> x%2 == 0).collect(Collectors.toList())); // Result: list of even numbers

        System.out.println(generateData(maxItemCount).parallel().filter(x -> x%2 == 0).map(x -> x * x).collect(Collectors.toList())); // Result: list of squared even numbers (filter has already removed the odd numbers)

        System.out.println(generateData(maxItemCount).parallel().filter(x -> x%2 == 0).reduce(0, Integer::sum)); // Result: 2550 - sum of the even numbers from 2 to 100 = 50*(50+1)

        System.out.println(generateData(maxItemCount).parallel().filter(x -> x%2 == 0).findFirst()); // Result: Optional[2] as 2 is the first even number between 1 and 100

        System.out.println(generateData(maxItemCount).parallel().filter(x -> x%2 == 0).findAny()); // Result: Optional[some even number] - with a parallel stream it can pick from any of the sub-streams

        System.out.println(generateData(maxItemCount).parallel().filter(x -> x%2 == 0).allMatch(x -> x%2 == 0)); // Result: true as all numbers are even and therefore divisible by 2

        System.out.println(generateData(maxItemCount).parallel().filter(x -> x%2 == 0).anyMatch(x -> x%3 == 0)); // Result: true as there is at least one even number between 1 and 100 divisible by 3 (i.e. 6)

        System.out.println(generateData(maxItemCount).parallel().filter(x -> x%2 == 0).noneMatch(x -> x%101 == 0)); // Result: true as there is no number between 1 and 100 that is divisible by 101



    }
    /*
        Generates whole numbers from 1 to limit parameter
     */
    private static Stream<Integer> generateData(int limit) {
        return Stream.iterate(1, n->n+1).limit(limit);
    }
}

Decoding Complex Systems

I recently read a book called ‘Meltdown’ by Chris Clearfield and András Tilcsik (Link: Meltdown).

The book provides a framework to reason about complex systems that can be found all around us (from the cars we drive to processes in a factory). The word ‘system’ is used in the generic sense where it means a set of components interacting with each other. Each component expects some sort of input and provides some sort of output.

The decomposition of a system into components can be done at different levels of detail. The closer we get to the ‘real’ representation, the more complex the interactions between components (or sub-systems) can get. Imagine the most detailed representation of a computer chip, which incorporates within it a quantum model of the transistor!

Let us look at some important points to consider when trying to understand a complex system. These allow us to classify and select appropriate lines of attack to unravel the complexity.

1. Complexity of Interaction

Complexity arises when we have non-linear interactions between systems. Linear interactions are always easier to reason about, and therefore to fix in case of issues. With non-linear interactions (e.g. feedback loops) it becomes difficult to predict the effect of changing inputs on the output. Feedback loops, if unbounded (i.e. not convergent), can lead to catastrophic system failures (e.g. incorrect sensor data leading to a wrong automated response – which worsens the situation).

Solution: Break feedback loops by replacing them with linear interactions where possible. Where feedback loops cannot be broken, add circuit breakers or a delay in the reaction, as sketched below.
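To make the ‘circuit breaker’ idea concrete, here is a minimal sketch in Java (the names and thresholds are mine, invented for illustration):

// A minimal circuit-breaker sketch: after 'maxFailures' consecutive
// failures the loop is broken open and the action is skipped until a
// cool-down period has passed. Names and thresholds are illustrative.
public class CircuitBreaker {
    private final int maxFailures;
    private final long cooldownMillis;
    private int failures = 0;
    private long openedAt = 0;

    CircuitBreaker(int maxFailures, long cooldownMillis) {
        this.maxFailures = maxFailures;
        this.cooldownMillis = cooldownMillis;
    }

    // Returns true if the protected action ran, false if the breaker is open.
    synchronized boolean tryRun(Runnable action) {
        if (failures >= maxFailures
                && System.currentTimeMillis() - openedAt < cooldownMillis) {
            return false; // open: break the feedback loop instead of reacting
        }
        try {
            action.run();
            failures = 0; // success: close the breaker again
            return true;
        } catch (RuntimeException e) {
            if (++failures >= maxFailures) openedAt = System.currentTimeMillis();
            return false;
        }
    }
}

Wrapping the automated response to a sensor reading in tryRun(...) means a misbehaving sensor can only drive the system around the loop a bounded number of times before a human has to intervene.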

2. Tight Coupling

When two or more systems are tightly coupled, it is quite easy to bring them all down by taking down just one. Adding slack to the interaction between systems requires each system to be able to deal with imprecise, inaccurate and missing inputs while preserving some sort of functional state.

Solution: Allow a clear statement of inputs, outputs and acceptable ranges. Provide internal checks to ensure errors do not cross component boundaries. Provide a clear indication of the health of a component.

3. Monitoring

Any system (or group of systems) requires monitoring to support control decisions. For example, when operating a car we monitor speed, fuel and the dashboard (for warning lights). Any system made up of multiple components/sub-systems should ideally have a monitoring feed from each of the components. But often we cannot get a feed directly from a component, or doing so would lead to information overload, so we rely on observer components (i.e. sensors) to help us. This adds a layer of obfuscation around the component. If the sensor fails then the operator/controller has no idea what is going on – or worse, has the wrong idea without knowing it, and therefore takes the wrong steps. This is a common theme with complex systems such as nuclear reactors, aeroplanes and stock markets, where indirect measurements are all that is available.

The other issue is that when a system is made up of components from different providers, each component may not have a standard way of providing its status. For example, in modern ‘cloud-enabled’ software we may have no way of knowing whether a cloud component which is part of our system has failed and restarted. This may or may not impact us, depending on how tightly coupled our components are to the cloud component and whether we need to respond to restarts (e.g. by flushing cached information).

Anomalising

While it is difficult to map any system of day-to-day complexity in enough detail to figure out where it can fail or degrade, we can use techniques such as Anomalising to make sure cases of failure are recorded and action is taken to prevent future occurrences. The process is straightforward:

  1. Gather data – collect information from different monitoring feeds etc. about the failure (this is why monitoring is critical)
  2. Fix raised issues – replace failing/failed components, change processes, re-train operators
  3. Address Root Cause – monitor replaced components, new procedures while making sure root cause is identified (e.g. was the component at fault or is it a deeper design issue? Are we just treating the symptom and not the cause?)
  4. Ensure solution is publicised so that it becomes part of ‘best practice’
  5. Audit – make sure audit is done to measure solution effectiveness

Human Element

Most interesting systems involve a human, whether as:

  • operator (e.g. pilot)
  • controller (e.g. traffic controller)
  • supervisor (e.g. in a factory)
  • beneficiary (e.g. patient wearing a medical device)
  • dependent (e.g. passenger in a car)

So the big question is: how can we humans improve how we work with complex systems? Or, the other way around: how can complex systems be improved to allow humans to work with them more effectively?

There is a deceptively simple process that can be used to peel back some of the complexity. We can describe this as a ‘check-plan-and-proceed’ mechanism.

  1. Review how the interaction with a given system has gone in the previous time frame (week/month/quarter) [Check]
  2. Figure out what can be improved and create a list of changes to be tried in the next time frame [Plan]
  3. Try out the planned changes in the next time frame [Proceed]

This allows the human component of a complex system to learn in bite-sized chunks.

This also helps in dealing with dynamic systems (such as stock markets) where (as per the book) the prediction task is the weather-forecasting equivalent of ‘predicting a tornado rather than simply rainfall’. When the check-plan-and-proceed mechanism is abandoned, we get systems running amok towards a ‘meltdown’ – be it a nuclear meltdown, a stock market crash, a plane crash or the collapse of a company.

Analytics, Machine Learning, AI and Automation

In the last few years buzzwords such as Machine Learning (ML), Deep Learning (DL), Artificial Intelligence (AI) and Automation have taken over from the excitement of Analytics and Big Data.

Often ML, DL and AI are placed in the same context, especially in product and job descriptions. This not only creates confusion as to the end target; it can also lead to loss of credibility and wasted investment (e.g. in product development).

Figure 1: Framework for Automation

Figure 1 shows a simplified framework for automation. It shows all the ingredients required to automate the handling of a ‘System’. The main components of this framework are:

  1. A system to be observed and controlled (e.g. telecoms network, supply chain, trading platform, deep space probe …)
  2. Some way of getting data (e.g. telemetry, inventory data, market data …) out of the system via some interface (e.g. APIs, service endpoints, USB ports, radio links …) [Interface <1> Figure 1]
  3. A ‘brain’ that can effectively convert input data into some sort of actions or output data, and which has one or more ‘models’ (e.g. trained neural networks, decision trees etc.) that contain its ‘understanding’ of the system being controlled. The ‘training’ interface that creates the model(s) and helps maintain them is not shown separately
  4. Some way of getting data/commands back into the system to control it (e.g. control commands, trade transactions, purchase orders, recommendations for next action etc.) [Interface <2> Figure 1]
  5. Supervision capability which allows the ‘creators’ and ‘maintainers’ of the ‘brain’ to evaluate its performance and if required manually tune the system using generated data [Interface <3> Figure 1] – this itself is another Brain (see Recursive Layering)

This is a so-called automated ‘closed-loop’ system with human supervision. In such a system the control can be fully automated, fully manual, or any combination of the two for different types of actions. For example, in safety-critical systems the automated closed loop can have cut-out conditions that disable Interface <2> in Figure 1, so that all control passes to the human user (via Interface <4> in Figure 1). A minimal sketch of such a loop follows.
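To make the framework concrete, here is a minimal Python sketch of the closed loop with a safety cut-out. All names (System, Brain, read_telemetry and so on) are illustrative assumptions, not from any real library:

    class System:
        """The system being observed and controlled (component 1)."""
        def read_telemetry(self):          # Interface <1>: data out
            return {"load": 0.72, "errors": 0}

        def apply_command(self, command):  # Interface <2>: control in
            print("applying:", command)

    class Brain:
        """Converts observations into actions using its model(s)."""
        def decide(self, observation):
            # Placeholder 'model': a single static rule.
            if observation["load"] > 0.9:
                return {"action": "scale_down"}
            return {"action": "no_op"}

    def control_loop(system, brain, cut_out):
        observation = system.read_telemetry()   # Interface <1>
        command = brain.decide(observation)
        if cut_out(observation):
            # Cut-out condition: Interface <2> is disabled and control
            # passes to the human operator via Interface <4>.
            print("manual control required:", observation)
        else:
            system.apply_command(command)        # Interface <2>
        return observation, command              # data for Interface <3>

    control_loop(System(), Brain(), lambda obs: obs["errors"] > 10)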

A Note about the Brain

The big fluffy cloud in the middle called the ‘Brain’ hides a lot of complexity – not so much in terms of algorithms and infrastructure, but in terms of even talking about the differences between things like ML, DL and AI.

There are two useful concepts for putting all these buzzwords in context when it comes to the ‘Brain’ of the system. In other words, the next time some clever person tells you there is a ‘brain’ in their software/hardware that learns, ask them two questions:

  1. How old is the brain?
  2. How dense is the brain?

Age of the Brain

Age is a very important criterion in most tasks. Games that preschool children struggle with are ‘child’s play’ for teenagers. Voting and driving are reserved for ‘adults’. In the same way, the age of an automated system’s brain says a lot about how ‘smart’ it is.

At its simplest a ‘brain’ can contain a set of unchanging rules that are applied to the observed data again and again [so-called static rule-based systems]. This is similar to a new-born baby that has fairly well-defined behaviours (e.g. hungry -> cry). This sort of brain is pretty helpless when the data has large variability. It cannot generate insights about the system being observed, and the rules can quickly become error-prone (thus the age-old question – ‘why does my baby cry all the time!’).
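A static rule-based ‘brain’ can be sketched in a few lines of Python; the rules and observation format here are made up for illustration:

    # Fixed rules applied to every observation; nothing is learnt and
    # unseen situations simply fall through to a default.
    RULES = [
        (lambda obs: obs["temperature"] > 90, "raise_alarm"),
        (lambda obs: obs["queue_length"] > 100, "add_worker"),
    ]

    def decide(observation):
        for condition, action in RULES:
            if condition(observation):
                return action
        return "no_op"

    print(decide({"temperature": 95, "queue_length": 10}))  # raise_alarm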

Next comes the brain of a toddler, which can think and learn but only in straight lines, and that too after extensive training and explanations (unless you are a very ‘lucky’ parent and your toddler is great at solving ‘problems’!). This is similar to a ‘machine learning system’ that is specialised to handle specific tasks. Give it a task it has not been trained for and it falls apart.
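For contrast, a ‘toddler’ brain might look like the scikit-learn sketch below: competent at the one narrow task it was trained on, helpless outside it. The toy data is invented for illustration:

    from sklearn.tree import DecisionTreeClassifier

    # Features: [load, error_rate]; labels: 0 = healthy, 1 = degraded.
    X = [[0.2, 0.0], [0.4, 0.1], [0.9, 0.3], [0.95, 0.5]]
    y = [0, 0, 1, 1]

    model = DecisionTreeClassifier().fit(X, y)
    print(model.predict([[0.85, 0.4]]))  # useful only for this exact task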

Next comes the brain of a pre-teen, which is maturing and learning all kinds of things with or without extensive training and explanations. ‘Deep learning systems’ have similar properties. For example, a Convolutional Neural Network (CNN) can extract features (such as edges) out of a raw image without requiring any kind of pre-processing, and can be used on different types of images (generalisation).
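To give a feel for what a convolutional layer does, here is a hand-rolled 2-D convolution with a fixed edge-detecting kernel. In a real CNN the kernel weights are learnt rather than hand-set, and the image here is a made-up array:

    import numpy as np

    def convolve2d(image, kernel):
        kh, kw = kernel.shape
        h, w = image.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    # A Sobel-like vertical-edge kernel.
    kernel = np.array([[-1, 0, 1],
                       [-2, 0, 2],
                       [-1, 0, 1]])

    image = np.zeros((6, 6))
    image[:, 3:] = 1.0                # left half dark, right half bright
    print(convolve2d(image, kernel))  # strong response along the edge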

At its most complex (e.g. a healthy adult), the ‘brain’ is able not only to learn new rules but, more importantly, to evaluate existing rules for their usefulness. Furthermore, it is capable of chaining rules and applying often-unrelated rules to new situations. Processing different types of input data is also relatively easy (e.g. facial expressions, tone and gestures alongside other data). This is what you should expect from ‘artificial intelligence’. In fact, with a true AI Brain you should not need Interface <4>, and perhaps only a very limited Interface <3> (almost a psychiatrist/psycho-analyst to a brain).

Brain Density

Brain density increases with age, then plateaus and eventually starts to decrease. From a processing perspective it is as if the CPU in your phone or laptop kept adding cores and therefore became capable of more complex tasks.

Static rule-based systems may not require massive computational power. Here more processing power may be needed at Interfaces <1> and <2> to prepare the data for input and output.

Machine-learning algorithms definitely benefit from massive computational power, especially when the ‘brain’ is being trained. Once the model is trained, however, applying it may not require much computing power. Again, more power may be needed to massage the data to fit the model parameters than to actually use the model.

Deep-learning algorithms require computational power throughout the cycle of preparation, training and use. Training and inference times are massively reduced when using special-purpose hardware (e.g. GPUs for neural networks). One rule of thumb: ‘if it doesn’t need special-purpose hardware then it’s probably not a real deep-learning brain; it may simply be a machine-learning algorithm pretending to be one’. CPUs are mostly good for the data-prep tasks before and after the ‘brain’ has done its work.

Analytics System

If we were to have only Interfaces <1> and <3> (see Figure 1) we could call it an analytics solution. Such a solution has no ability to influence the system; it is merely an observer. This is very popular, especially on the business-support side. Here Interface <4> may not always be something tangible (such as a REST API or a command console); it might represent strategic and tactical decisions instead. The ‘Analytics’ block in this case consists of data visualisation and user interface components.

True Automation

To enable true automation we must close the loop (i.e. Interface <2> must exist). But there is something important for true automation that I have not shown in Figure 1: the ability to process event-based data. This is especially important for systems that are time-dependent – real-time or near-real-time – such as trading systems, network orchestrators etc. This is shown in Figure 2.

Figure 2: Automation and different types of data flows

Note: events are generated not only by the System being controlled but also by the ‘Brain’. Therefore the ‘Brain’ must be capable of handling both time-dependent and time-independent data. It should also be able to generate commands that are time-dependent as well as time-independent.
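As a rough sketch of that requirement (the event shapes and handlers are my own assumptions), the ‘Brain’ needs both an event-driven path and a batch path:

    import queue

    events = queue.Queue()

    def on_event(event):
        # Time-dependent path: react to each event as it arrives.
        if event["type"] == "threshold_breach":
            return {"command": "throttle"}

    def on_batch(records):
        # Time-independent path: periodic bulk processing.
        return {"command": "retrain" if len(records) > 1000 else "no_op"}

    events.put({"type": "threshold_breach", "value": 0.97})
    while not events.empty():
        print(on_event(events.get()))  # time-dependent command
    print(on_batch([{}] * 1500))       # time-independent command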

Recursive Layers

Recursive Layering is a powerful concept where an architecture allows implementations of itself to be layered on top of each other. This is possible with ML, DL and AI components. The ‘System’ in Figures 1 and 2 can itself be another Brain-plus-controlled-System combination whose outputs are fed into a further Brain (a super-brain? a supervisor brain?). An example is shown in Figure 3. This is a classic analytics-over-ML example, where the ‘Analytics’ block from Figures 1 and 2 has a Brain inside it (it is not restricted to visualisation and UI). That inner Brain may be a simple new-born brain (e.g. static SQL data-processing queries) or a sophisticated deep-learning system.

Figure 3: Recursive layering in ML, DL and AI systems.

The analytics feed is another API point that can act as an input data source (Interface <1>) for another ‘Brain’ – say, one supervising the Brain that is generating the analytics data.
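A rough sketch of this layering (all names are illustrative): a brain-plus-system pair exposes its analytics feed as the ‘System’ observed by a supervisor brain:

    class InnerLoop:
        """A brain wrapped around a system; from the outside it is just
        another system exposing an analytics feed (Interface <1>)."""
        def read_telemetry(self):
            observation = {"load": 0.95}
            command = {"action": "scale_down" if observation["load"] > 0.9
                       else "no_op"}
            return {"observation": observation, "command": command}

    class SupervisorBrain:
        """A 'new-born' supervisor: static rules over the inner brain's
        behaviour rather than over the raw system data."""
        def decide(self, feed):
            if feed["command"]["action"] == "scale_down":
                return {"action": "alert_operator"}
            return {"action": "no_op"}

    feed = InnerLoop().read_telemetry()    # Interface <1> of the outer loop
    print(SupervisorBrain().decide(feed))  # {'action': 'alert_operator'}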

Conclusion

So the next time you get a project that involves automation (implementing or using it), think about the interfaces and components shown in Figure 1. Think about what type of brain you need (age and density).

If you are on the product side, by all means make bold claims, but not illogical or blatantly false ones. Just as you would not ask a toddler to do a teenager’s job, don’t advertise one as the other.

Finally, think hard about how the users will be included in the automation loop. What conditions will disable Interface <2> in Figure 1 and cut out to manual control? How can the users monitor the ‘Brain’? Fully automated closed-loop systems are not good for anyone (just ask John Connor from the Terminator series, or the people at Knight Capital: https://en.wikipedia.org/wiki/Knight_Capital_Group). Humans often provide deeper insights, based on practical experience and knowledge, than ML or DL is capable of.