The internet and its associated technologies are great social, economic and political levellers. Facebook, Twitter and online publications have democratized knowledge and changed how we perceive power structures. With the internet, information is no longer the monopoly of the few at the top of the hierarchy.
But as impressed as we are with this ubiquitous technology, there is also the worry that the internet can divide people. Platforms take note of our past posts, our likes and the websites we have visited, then feed us new information based on that behaviour. Thanks to big data and sophisticated algorithms, we are nudged towards particular sites and ideas: the mathematical reasoning behind these systems is good enough to predict, from our previous engagement, which sites or forums we might be interested in.
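The mechanics of this nudging can be illustrated with a minimal, hypothetical sketch: a recommender that scores candidate articles by their similarity to a user's past clicks. All data, topic labels and names here are invented for illustration; real platforms use far more elaborate models, but the principle of feeding behaviour back into predictions is the same.

```python
# A minimal sketch of behaviour-based recommendation (hypothetical data):
# score each candidate article by its similarity to the user's reading history.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-topics vectors."""
    dot = sum(a[k] * b[k] for k in a)  # Counter returns 0 for missing keys
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(history, candidates, top_n=2):
    """Rank candidate items by similarity to the user's aggregate history."""
    profile = Counter()
    for item in history:            # merge all past engagement into one profile
        profile.update(item)
    ranked = sorted(candidates.items(),
                    key=lambda kv: cosine(profile, Counter(kv[1])),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]

# A user who has mostly read politics gets nudged towards more politics.
history = [Counter({"politics": 3, "economy": 1}),
           Counter({"politics": 2})]
candidates = {
    "article_a": {"politics": 4},
    "article_b": {"sport": 3},
    "article_c": {"politics": 1, "economy": 2},
}
print(recommend(history, candidates))  # → ['article_a', 'article_c']
```

Note the feedback loop: whatever the user clicks next is folded back into the profile, which is how siloing compounds over time.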
What could come out of this intelligence is that users become fixated on particular ideas. Without being aware of it, they are siloed into particular sites, discussions and forums, and greeted with micro-targeted advertisements. Social media platforms, travel platforms and news portals, for instance, create distinct virtual communities with particular ideologies and preferences for events or activities. They have created virtual tribes, promoting echo chambers and reinforcing confirmation bias among their members. At times, these sites also become purveyors of false information.
There is no running away from the fact that big data and algorithms will become an integral part of our lives. Many things we do now rely on them: job applications, immigration checks, criminal processing and education admissions are increasingly mediated by algorithmic systems. Effective as they may be, we should not become passive end users. We would like to believe that such systems are fair, efficient and scientific, because they are founded on data (perceived as value-free) and because algorithms are thought to embody objective reasoning, free of bias. But designing algorithms and harvesting data involve human judgement. Given that element of subjectivity, there is always the possibility that data are harvested, and algorithms designed, to suit particular objectives.
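To see how human judgement enters the design, consider a hypothetical weighted-sum appraisal of the kind used in automated screening. The weights are the designer's assumptions, not objective facts, and changing a single assumption changes who passes. All features, weights and the threshold below are invented for illustration.

```python
# Hypothetical screening sketch: the outcome hinges on a designer's assumption,
# not on anything intrinsic to the applicant.

def score(applicant: dict, weights: dict) -> float:
    """Weighted sum over applicant features; the weights encode human judgement."""
    return sum(weights.get(k, 0.0) * v for k, v in applicant.items())

applicant = {"test_score": 0.9, "experience_years": 0.6, "employment_gap": 1.0}

# One designer treats an employment gap as a strong negative signal...
harsh = {"test_score": 1.0, "experience_years": 1.0, "employment_gap": -1.5}
# ...another treats it as barely relevant.
lenient = {"test_score": 1.0, "experience_years": 1.0, "employment_gap": -0.1}

threshold = 1.0
print(score(applicant, harsh) >= threshold)    # False: rejected
print(score(applicant, lenient) >= threshold)  # True: accepted
```

The same applicant is rejected under one set of assumptions and accepted under another, yet both systems would present their verdicts as the output of impartial mathematics.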
We should not assume that data science, capable as it is of solving a myriad of problems, will always work for us; at times, big data can work against us. Your chance of employment could disappear if a pre-employment appraisal evaluates you the wrong way. Your application for citizenship could be denied because the evaluation rests on algorithms and data built on assumptions and objectives that work against you. Your claim to any form of social security could be dashed because the evaluation process does not fit your profile. Worse still, there is a tendency not to challenge such assessments: not only are they usually viewed as free from bias because of their sophisticated mathematical reasoning, but proprietary rights also keep the process opaque. Cathy O'Neil, author of the book Weapons of Math Destruction, argues that data science sorts people into winners and losers.
Indeed, over-reliance on, and unquestioning confidence in, the ability of algorithms and big data to solve our problems could leave us with a more divided world: between those who can game the system and those who cannot. Preventing this requires us to be aware that big data and algorithms are as much about human judgement as they are about exacting mathematical reasoning. We need to appreciate that designing an algorithm involves selecting and defining data to serve certain objectives. That is not an exact science, and it is certainly one that requires greater scrutiny and accountability.