New-Age Warfare: The Digital Attack

“Data is the pollution problem of the information age, and protecting privacy is the environmental challenge”

Bruce Schneier

As technology progresses, more and more of our information has been moving to the digital world. As a result, cyber attacks have become increasingly common and costly. A data breach is a security incident in which information is accessed without authorization.

Some of the biggest data breaches of the 21st century:

Adobe: As reported in early October 2013 by security blogger Brian Krebs, the breach impacted 153 million user records.

Sina Weibo: With over 500 million users, Sina Weibo is China’s answer to Twitter. However, in March 2020 it was reported that the real names, site usernames, gender, location, and phone numbers for 172 million users had been posted for sale on dark web markets.

Marriott International: Marriott International announced in November 2018 that attackers had stolen data on approximately 500 million customers. The breach initially occurred on systems supporting Starwood hotel brands starting in 2014. The attackers remained in the system after Marriott acquired Starwood in 2016 and were not discovered until September 2018.

LinkedIn: The major social network for professionals suffered a breach in 2012; in 2016, attackers were found offering the email addresses and passwords of around 165 million LinkedIn users for just 5 bitcoins.

State Bank of India (SBI): In January 2019, India’s largest bank, State Bank of India, left a server unprotected by failing to secure it with a password. The server hosted data from ‘SBI Quick’, a service that texted customers their transaction and account details, putting about three million text messages in jeopardy.

Kudankulam nuclear power plant (KKNPP) & ISRO: Malware was installed on the computers of India’s biggest nuclear power plant and the country’s apex space agency in September 2019.

Reviewing the hot topic of data scraping by China, ThePrint’s Editor-in-Chief Shekhar Gupta explained that China passed its first National Intelligence Law in 2017, which compels citizens and organizations of the People’s Republic of China to assist the country’s intelligence activities. He described fifth-generation warfare as a warfare of ‘perceptions and information’, which is still ‘unfolding’. He said, “It is also a warfare of cultural and moral perceptions. This is carried out without using any troops on the ground. Fifth generation warfare has to be fought outside, essentially behind keyboards.” Since the Galwan Valley clashes, cyber attacks by Chinese hackers have seen a massive surge in India. In the backdrop of tensions on the Line of Actual Control, the Indian government banned 118 applications. However, threats to privacy continue to linger.

The bloatware problem: bloatware may be defined as a set of pre-installed applications that cannot be uninstalled or even disabled on a mobile phone. Several mobile manufacturers keep the price of their devices low because they compensate for reduced profits on device sales by making additional profits through these third-party apps. The problem is particularly acute with some of the Chinese mobile manufacturers. Xiaomi, for example, by some estimates, earned 9.1% of its revenue in 2018 through these pre-loaded apps and services. Apart from consuming unnecessary space on the phone and draining the battery, these apps pose serious security threats because they collect user data in surreptitious ways that can easily be misused.

What data is being collected?

A paper titled ‘An Analysis of Pre-installed Android Software’ by researchers at the IMDEA Networks Institute brings forth a lot of information on these pre-installed applications. They have custom permissions which allow them privileged access to system resources. They also have third-party libraries embedded in them. There are great challenges to safeguarding the privacy of the user, especially in the absence of a robust data protection law.

What to do?

The most effective way could be to make it obligatory for device manufacturers to provide users with sufficient information on such apps, including full disclosure of the type of data being collected, the purpose for which the data will be used, and the entities with which the data will be shared, if any. All of this information should be communicated in a language the user can understand easily. This approach would allow consumers to make an informed choice about the apps they want to use on their phones and the risks associated with them. A comprehensive data protection and privacy law with real enforcement mechanisms would benefit Indians in more ways than one.

Data Privacy

“Data theft and tampering are emerging vices in storage and backup industry and our system would checkmate it.”

Soumitra Agarwal

Privacy, in a broader sense, is the right of individuals, groups, or organizations to control who can access their information. Several universal processes can help develop a data privacy framework:

Discovering and classifying personal data: determining what types of data are collected, how they are collected and stored, and who can access them.

Conducting a Privacy Impact Assessment (PIA): determining where data is stored, which data security measures are currently implemented, and where systems may be vulnerable to a data privacy breach. Security measures include change management, data loss prevention, data masking, protection of data, ethical walls, privileged user monitoring, secure audit trail archiving, sensitive data access auditing, user rights management, user tracking, and VIP data privacy.

Understanding cross border marketing and third-party marketing issues.

Analyzing compliance requirements such as legislative regulations, industry-specific regulations, and third-party obligations. Developing privacy policies and internal controls for data governance, data privacy, security breaches, and data privacy training.
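The discovery-and-classification step lends itself to automation. A minimal sketch of the idea (the categories and regexes here are hypothetical illustrations; production scanners use far more sophisticated detectors):

```python
import re

# Hypothetical detectors for a few common categories of personal data.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{10}\b"),
}

def classify(text):
    """Return the categories of personal data found in a text record."""
    return sorted(k for k, pat in PII_PATTERNS.items() if pat.search(text))

record = "Contact Asha at asha@example.com or 9876543210."
print(classify(record))  # ['email', 'phone']
```

A real framework would attach these labels to records in a data catalogue, which is what makes the later PIA and compliance steps tractable.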

Time Capsule Cryptography: The Dark Archive

Imagine this scenario: 10 scientists locked in a room make a prediction of a catastrophic event in the very far future. If that information is released now, there will be either meaningless mass panic or outright scorn and disbelief. The 10 of them make a pact to release the information 50 or 100 years in the future. What’s the method?

10 sealed envelopes, each containing a copy of the prediction, passed down through generations, to be revealed at the right time.
There are several ways this can fail: human curiosity to open such a mysterious letter, human greed to trade the information for monetary gain, natural disasters like fire or flood, or even something as basic as carelessness.


When a similar scheme failed, one meant to keep the individual contributions to The Belfast Project secret until the deaths of the parties involved, Jonathan Zittrain, director of the Berkman Center for Internet and Society at Harvard University, started thinking about how to ensure that data remain protected for the promised time period.


Mr Zittrain received a $35,000 grant from the Knight Foundation, an organisation dedicated to “informed and engaged communities”, to create an encrypted “time-capsule” service. Its aim is to enable a person to securely send a message, in effect, into the future—encrypted in such a way that it cannot be read by anyone until a certain date or event.


How does such a system work?


Well, there are a few options by which we can achieve what may be referred to as a “dark archive”.


One is to lock a digital version of the message behind a cryptographic puzzle that current computers are incapable of solving, but that computers 10 or 20 years in the future (presumed to be far faster and cleverer) could tackle with ease. That plan, however, is fraught with uncertainty around the pace of technological progress.


Mr Zittrain’s idea is to use a “bank and trust” model instead. He intends to encrypt the data with the best technology available today, then split the key that unlocks the encryption into multiple fragments. Each fragment would be entrusted to a library or lawyer in a different jurisdiction, who would be instructed to hand it back only once the specified conditions had been met (or if forced to do so by some legal challenge).


Imagine key fragments distributed around the world to, say, ten parties, requiring the cooperation of at least six of them to reassemble the key needed to get the documents. The parties would be instructed only to announce the keys when the original owner’s specified conditions are met.

Early disclosure wouldn’t be impossible, but it would require a sustained effort that would only be worth undertaking if the access were a genuine priority, and one justifiable to the authorities of several countries who could each in turn pressure their respective keyholders. That kind of encryption is easy to do, and it can further be used to provide decent assurances that the material encrypted has not been altered in any way since it was first locked up.
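The key-splitting step is a textbook threshold secret-sharing problem. A minimal sketch of Shamir’s scheme with a 6-of-10 threshold (an illustration of the idea, not Mr Zittrain’s actual implementation):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a 16-byte secret

def split(secret, n=10, k=6):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789)
print(reconstruct(shares[:6]) == 123456789)  # True: any 6 shares suffice
```

Fewer than six shares reveal nothing about the secret, which is exactly the property the “bank and trust” model relies on: no small coalition of keyholders can open the archive early.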


Dan Wallach, a computer security expert at Rice University, in Texas, believes that Mr Zittrain has chosen the best model for his dark archive. However, he cautions that technical challenges remain, principally those around the strength of the encryption itself. The cat-and-mouse game between those who make codes and those who break them never slows, and Dr Wallach says that in order to anticipate codebreaking abilities in a distant future, “you have to over-engineer things”.


The disadvantage of time-release cryptography is that the recipient must devote a processor to solving the problem continuously for that entire period, during which the message remains secret. The best-known proposal for a puzzle with the right properties is due to Rivest, Shamir and Wagner in the paper ‘Time-lock Puzzles and Timed-release Crypto’. It is based on repeated squaring in RSA groups. A recent result precludes any intrinsically sequential time-lock puzzles in the random oracle model (e.g., based only on hashing).
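The repeated-squaring construction can be sketched in a few lines (with toy parameters; a real puzzle would use a 2048-bit modulus and a squaring count tuned to decades of sequential computation):

```python
# Toy RSA time-lock puzzle: the creator knows p and q, the solver only n.
p, q = 10007, 10009              # small primes for illustration only
n, t = p * q, 10_000             # t = number of sequential squarings
key = 123456                     # the secret to be time-locked

# Creator's shortcut: reduce the exponent 2^t modulo phi(n).
phi = (p - 1) * (q - 1)
e = pow(2, t, phi)
locked = (key + pow(2, e, n)) % n   # mask the key with 2^(2^t) mod n

# Solver must perform t squarings in sequence -- no known shortcut
# without the factorization of n.
x = 2
for _ in range(t):
    x = x * x % n
print((locked - x) % n)  # 123456
```

The creator can build the puzzle cheaply using the factorization of n, while anyone without it must grind through all t squarings, which is what ties the unlock time to wall-clock computation.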


Zittrain’s challenge is to build a time capsule that is flexible enough to allow early access to sensitive information as a matter of last resort, yet secure enough to protect the very disclosures that future historians will find most useful. At the moment, he fears that anyone holding information that could be of great future value, but that poses some reputational or legal risk, makes a simple choice. “They just toss it,” he says.

5G: The age of IoT

The face of the Earth is constantly being changed through evolution. From dinosaurs and smoky craters, we have arrived at humans and civilizations.

While our biological evolution is too slow to notice, as progressive thinkers we are constantly evolving the technology before us. One such evolution is that of networks: where we started with 1G, we are now stepping into 5G.


1G technology had poor coverage and sound quality. There was no roaming support between various operators and, as different systems operated on different frequency ranges, there was no compatibility between systems. Since calls were not encrypted, anyone with a radio scanner could drop in on a call.


2G was not only an evolution, it brought about a cultural revolution! For the first time, people could send text messages (SMS), picture messages, and multimedia messages (MMS) on their phones.


3G aimed to standardize the network protocol used by vendors. Users could access data from any location in the world as the ‘data packets’ that drive web connectivity were standardized. This made international roaming services a real possibility for the first time. 3G’s increased data transfer capabilities (4 times faster than 2G) also led to the rise of new services such as video conferencing, video streaming and voice over IP (such as Skype).


4G offers fast mobile web access (up to 1 gigabit per second for stationary users) which facilitates gaming services, HD videos and HQ video conferencing. The catch was that while transitioning from 2G to 3G was as simple as switching SIM cards, mobile devices needed to be specifically designed to support 4G.


4G is the current standard around the globe. However, some regions are still plagued by network patchiness and have low 4G LTE penetration. The question is: with the rollout of 4G still patchy, why is the world eager to jump to 5G?


During an interview with Tech Republic, Kevin Ashton described how he coined the term “the Internet of Things” – or IoT for short. Since then, the phrase caught on and IoT was soon touted as the next big digital revolution that would see billions of connected devices seamlessly share data across the globe.

According to Ashton, a mobile phone isn’t a phone, it’s the IoT in your pocket; a number of network-connected sensors that help you accomplish everything from navigation to photography to communication and more. The IoT will see data move out of server centers and into what are known as ‘edge devices’ such as Wi-Fi-enabled appliances like fridges, washing machines, and cars.


4G networks wouldn’t be able to support such a network. As 4G’s latency of between 40ms and 60ms is too slow for real-time responses, a number of researchers started developing the next generation of mobile networks, 5G.


5G has actually been years in the making.


In 2008, NASA helped launch the Machine-to-Machine Intelligence (M2Mi) Corp to develop IoT and M2M technology, as well as the 5G technology needed to support it.


In the same year, South Korea developed a 5G R&D program, while New York University founded the 5G-focused NYU WIRELESS in 2012.


5G runs on the same radio frequencies that are currently being used for your smartphone, on Wi-Fi networks and in satellite communications, but it enables technology to go a lot further.

Imagine billions of connected devices gathering and sharing information in real time to reduce road accidents; or life-saving applications that can take flight thanks to lag-free guaranteed connections; or production lines so predictive they can prevent interruptions well before they occur.


While 5G phones are already available in India from Realme and iQoo, both Chinese-owned, the 5G spectrum and the networks are going to take time.

India is set to join the 5G revolution soon, with the Telecom Regulatory Authority of India (TRAI) preparing to open the spectrum in 2020 and telecom operators in the country getting ready for 5G trials. The country needs to explore all three spectrum band categories (millimetre wave, mid-band and sub-6 airwaves) in order to realise the full benefits of 5G. If India does not want to be left behind, as happened in all the previous ‘G’ transitions, it is crucial to roll out 5G and play a pivotal role in defining what the world will use, so as to grow technologically and economically.

One Reality to rule them all

When you’re playing tennis on your Wii gaming console, in a computer-simulated environment, with your motor actions reflected on a screen by your character, you are experiencing Virtual Reality, or VR.


When you run around a physical space to reach and collect Pokémon from a virtual realm that occupies the same physical space as you, but exists only on your mobile phone, as in Pokémon Go, you are experiencing Augmented Reality, or AR.


In a Mixed Reality (MR) experience, which combines elements of both AR and VR, real-world and digital objects interact. Mixed reality technology is just now starting to take off with Microsoft’s HoloLens, which lets developers run apps, use a phone or PC keyboard to type text, view a live stream from the user’s point of view, and remotely capture mixed reality photos and videos.


Combining these three alternate-reality concepts, we get a convergent technology: Extended Reality, or XR. Extended reality is an umbrella term for immersive technologies that merge the physical and virtual worlds by blending AR, VR, MR and everything in between.


Extended Reality is an idea that’s been around for a long time, though primarily in science fiction. Stanley G. Weinbaum may have been the first to envision it back in 1935, when he wrote a story, “Pygmalion’s Spectacles,” in which a professor invents a pair of goggles that allow moviegoers to taste, smell and touch imaginary things, talk to fictional characters and immerse themselves in a story that happens around them, instead of on a screen.


In 1962, a cinematographer named Morton Heilig patented Sensorama, in which a person sat in a semi-enclosed cabinet and experienced a stereoscopic 3-D display, augmented by a fan that spread aromas and a vibrating chair to simulate movement.


In the late 1970s, Massachusetts Institute of Technology researchers developed an early VR mapping simulation that allowed users to move through the streets of Aspen, Colo. In the early 1990s, Boeing researchers developed the first AR application, which guided aircraft assembly workers on how to install wiring. Since then, XR devices have grown increasingly miniaturized — and become wearable.


While Extended Reality is still in its early phase, it’s already growing explosively, so that by 2022, sales of XR technology could surpass $200 billion. Telecommunications researchers predict that the advent of 5G wireless networks, which will make it possible to transmit vast amounts of data more quickly, will help make XR even more powerful and sophisticated.


Looking at some applications of XR in the life of an average human being:

Remote work: Workers can connect to the home office or with professionals located around the world in a way that makes both sides feel like they are in the same room.


Retail: XR gives customers the ability to try before they buy. Watch manufacturer Rolex has an AR app that allows you to try on watches on your actual wrist, and furniture company IKEA gives customers the ability to place furniture items into their home via their smartphone.


Real estate: Finding buyers or tenants might be easier if individuals can “walk through” spaces to decide if they want it even when they are in some other location.


While we consider the advantages of XR, especially in the aftermath of COVID-19, developing the technology and implementing it on an everyday basis are two very different challenges.


First, XR technologies collect and process huge amounts of very detailed and personal data about what you do, what you look at, and even your emotions at any given time, which have to be protected.


Secondly, the cost of implementing the technology needs to come down; otherwise, many companies will be unable to invest in it.


Finally, it is essential that the wearable devices that allow a full XR experience are fashionable and comfortable as well as always connected, intelligent, and immersive.


There are significant technical and hardware issues to solve that include but are not limited to the display, power and thermal, motion tracking, connectivity and common illumination—where virtual objects in a real world are indistinguishable from real objects especially as lighting shifts.

Intelligent Machinery

In 1935 Turing described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. This is Turing’s stored-program concept, and implicit in it is the possibility of the machine operating on, and so modifying or improving, its own program. Turing’s conception is now known simply as the universal Turing machine.
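Turing’s machine, a tape, a read/write head, and a table of rules, is simple enough to simulate in a few lines. A minimal sketch (the rule format and the toy bit-flipping machine below are illustrative choices, not Turing’s own notation):

```python
def run_turing_machine(rules, tape, state="start"):
    """Simulate a one-tape Turing machine.

    rules maps (state, symbol) -> (new_state, new_symbol, move),
    where move is +1 (right), -1 (left), or 0 (stay)."""
    tape = dict(enumerate(tape))
    head = 0
    while state != "halt":
        symbol = tape.get(head, "_")          # '_' is the blank symbol
        state, tape[head], move = rules[(state, symbol)]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Toy machine: scan right, flipping 0 <-> 1, and halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(flip, "10110"))  # 01001
```

Because the rule table is itself just data, it could be written onto the tape and interpreted by another machine, which is the essence of the universal Turing machine and the stored-program concept.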

“What we want is a machine that can learn from experience, and the possibility of letting the machine alter its own instructions provides the mechanism for this.”

In 1948 he introduced many of the central concepts of AI in a report entitled “Intelligent Machinery.”
One of Turing’s original ideas was to train a network of artificial neurons to perform specific tasks, something later accomplished by Belmont Farley and Wesley Clark of MIT, who ran the first artificial neural network, limited by computer memory to no more than 128 neurons. The main goal was to understand how the human brain works at the neural level and, in particular, how people learn and remember.

“What we thought we were doing (and I think we succeeded fairly well) was treating the brain as a Turing machine.”

Turing illustrated his ideas on machine intelligence by reference to chess—a useful source of challenging and clearly defined problems against which proposed methods for problem solving could be tested.

Although Turing experimented with designing chess programs, he had to content himself with theory in the absence of a computer to run his chess program. Later, in 1997, Deep Blue, a chess computer built by IBM, beat the reigning world champion, Garry Kasparov, in a six-game match.

While Turing’s prediction that computers would one day play very good chess came true, his expectation that chess programming would contribute to the understanding of how human beings think did not.

In 1950 Turing sidestepped the traditional debate concerning the definition of intelligence, introducing a practical test for computer intelligence that is now known simply as the Turing test.

The Turing test involves three participants: a computer, a human interrogator, and a human foil. The interrogator attempts to determine, by asking questions of the other two participants, which is the computer. All communication is via keyboard and display screen.

The interrogator may ask questions as penetrating and wide-ranging as he or she likes, and the computer is permitted to do everything possible to force a wrong identification.

A number of different people play the roles of interrogator and foil, and, if a sufficient proportion of the interrogators are unable to distinguish the computer from the human being, then (according to proponents of Turing’s test) the computer is considered an intelligent, thinking entity.

However, no AI program has come close to passing an undiluted Turing test.

Morpheus

“Today’s approach of eliminating security bugs one by one is a losing game,” said Todd Austin, U-M professor of computer science and engineering and a developer of the system.

People are constantly writing code, and as long as there is new code, there will be new bugs and security vulnerabilities. To attack this problem head-on, the University of Michigan developed a new computer processor architecture on which computers proactively defend against threats, rendering the current security model of bugs and patches obsolete.

“Imagine trying to solve a Rubik’s Cube that rearranges itself every time you blink,” Austin said. “That’s what hackers are up against with MORPHEUS. It makes the computer an unsolvable puzzle.”

Today’s cyberattacks typically use malware to abuse basic programming features such as permissions and code injection, or to manipulate unusual states, for example memory buffer overruns (a ‘control-flow’ attack) and information leakage.

These look like unavoidable software problems, and that is how today’s industry treats them: by exposing and patching vulnerabilities, essentially rewriting code so that an error state is no longer possible.

It’s a never-ending job, because new code keeps getting added, which adds new vulnerabilities, requiring new patches.

Backed by the famous US Defense Advanced Research Projects Agency (DARPA), Morpheus sets out to counter weaknesses in today’s microprocessors, which the researchers believe make vulnerabilities and their exploits impossible to defend against. Morpheus encrypts and randomises or ‘churns’ data every 50ms – faster than any attacker can locate it – in effect making many common vulnerabilities impossible to exploit.

This ‘moving target’ defence wouldn’t make computers unhackable – Morpheus doesn’t address every type of attack – but it would at least greatly reduce the attack surface.
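The churn idea can be illustrated with a toy sketch. This is not Morpheus’s actual mechanism, which randomizes undefined machine semantics at the microarchitecture level; it merely shows why a periodically re-randomized representation makes anything an attacker leaks go stale:

```python
import secrets

class ChurningStore:
    """Toy moving-target defence: values are XOR-masked with a key that
    is re-randomized ("churned") periodically, so leaked raw memory
    snapshots quickly become useless."""

    def __init__(self):
        self._key = secrets.randbits(64)
        self._data = {}

    def churn(self):
        """Re-mask everything under a fresh key (Morpheus: every 50 ms)."""
        new_key = secrets.randbits(64)
        self._data = {k: v ^ self._key ^ new_key for k, v in self._data.items()}
        self._key = new_key

    def put(self, name, value):
        self._data[name] = value ^ self._key

    def get(self, name):
        return self._data[name] ^ self._key

store = ChurningStore()
store.put("return_address", 0xDEADBEEF)
leaked = store._data["return_address"]   # attacker snapshots raw memory
store.churn()                            # ...but the representation churns
print(store.get("return_address") == 0xDEADBEEF)  # True: legitimate access
print(leaked == store._data["return_address"])    # False (almost surely)
```

Legitimate code always reads through the current key, so it never notices the churn; the attacker’s stolen snapshot, however, refers to a representation that no longer exists.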

Quantum Computing

What is Quantum Computing?

Computing systems are primarily created to store and manipulate information. The classical computers we’re used to manipulate individual bits, which store information as a binary ‘0’ or ‘1’.

Quantum computers use quantum mechanical phenomena to manipulate information in the form of ‘quantum bits’, or ‘qubits’.

REAL WORLD APPLICATIONS

IBM Q has undertaken the first industry initiative to build universal quantum computers for finance, engineering and science. Part of this initiative involves advancing the quantum computing technology stack and exploring applications to make it universally usable and accessible.

Currently, several IBM quantum devices are available to the public through their quantum cloud services. Users can access simulators and quantum devices for free through IBM Q Experience or Qiskit.

Some of the major real-world applications of this technology are as follows:

Traffic optimization by Volkswagen:

  • Traffic optimization refers to methods by which time spent stopped in road traffic is reduced.
  • Volkswagen and D-Wave announced a collaboration to find a solution to traffic optimization using quantum computing.
  • After working together for three months to figure out how to map the data to quantum machinery, the team was able to generate solutions in seconds.
  • When attempted on Volkswagen’s regular (i.e., classical) servers, the same solution took up to half an hour to generate.
  • The Volkswagen software engineers were even able to use D-Wave quantum machinery through the cloud instead of on-premise hardware.
  • Volkswagen has continued its quantum investments since the project and announced a partnership with Google for further development in traffic optimization using quantum computing.

Election modelling

  • Henderson, a senior data scientist at QxBranch, an engineering and analytics company specializing in quantum computing applications, applied quantum-trained models to simulate election results.
  • Using historical election results, state result probabilities based on polling, and publicly available data from the statistical analysis site FiveThirtyEight, Henderson and his colleagues mapped the election to a Boltzmann machine (a type of neural network), which was then mapped to a quantum system.
  • Each iteration of the training model produced 25,000 solutions, which showed greater uncertainty about a Clinton win and pinpointed most of the “tipping-point” states, which the classical systems failed to identify.

Marketing and advertisement

  • Recruit, a Japan-based marketing and communications corporation, has begun using quantum computing for marketing and advertising applications.
  • Recruit used quantum technology to place advertisements on mobile platforms and found it to be more effective than the previously used machine learning algorithms.
  • Recruit is now trying to improve upon its machine learning methods through quantum computing, in particular its recommendation systems.

Denso and Toyota

  • In December, DENSO Corporation and Toyota partnered to apply quantum computing to factory and traffic optimization as well as autonomous driving.
  • The companies used cloud-based quantum systems to analyze the information and improve efficiency, which included working on traffic decongestion and emergency vehicle route optimization.
  • At the Consumer Electronics Show, DENSO showed that quantum technology allowed the companies to immediately carry out calculations on a larger system of data and calculate optimal routes for more vehicles in real time.
  • Traditional systems could only manage individual optimization.

Cryptography & IT security

  • IT security depends primarily on encryption and public-key cryptography, which are based on mathematical problems that are hard to solve.
  • Even with enormous amounts of computing power, modern algorithms with suitable key lengths, such as AES-128, RSA-2048 and ECDSA-256, would take centuries, or even longer than the lifetime of the universe, to break.
  • However, quantum computers can run algorithms, such as Shor’s algorithm, that break some of these schemes in a significantly shorter time span.
  • While symmetric algorithms such as AES-256 will merely be weakened, asymmetric algorithms such as RSA and ECDSA will be rendered useless.
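The quantum speedup in Shor’s algorithm comes from finding the order r of a base a modulo n; once r is known, the factors follow from classical arithmetic. A toy classical sketch, with brute-force order finding standing in for the quantum step:

```python
from math import gcd

def find_order(a, n):
    """Brute-force the order r with a^r = 1 (mod n).
    This is the step a quantum computer does exponentially faster."""
    r, x = 1, a % n
    while x != 1:
        x = x * a % n
        r += 1
    return r

def shor_classical(n, a):
    """Recover two factors of n from the order of a, when possible."""
    r = find_order(a, n)
    if r % 2:
        return None                    # odd order: try another base
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                    # trivial square root: try another base
    return sorted((gcd(y - 1, n), gcd(y + 1, n)))

print(shor_classical(15, 7))  # [3, 5]
```

On a classical machine `find_order` takes exponential time in the bit length of n; the quantum Fourier transform brings it down to polynomial time, which is what breaks RSA-sized moduli.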

Schrodinger’s equation

  • Researchers from Osaka City University (OCU) in Japan have discovered a quantum algorithm that would enable users to perform full configuration interaction (Full CI) calculations for chemical reactions without the exponential/combinatorial explosion that classical methods suffer.
  • This gives the exact numerical solutions of Schrodinger’s equation, which are intractable problems for current supercomputers.
  • Such a quantum algorithm contributes to accelerating the implementation of practical quantum computers.
  • A paper on the Full CI approach implemented on quantum computers has been published in ACS Central Science.

Advantage of quantum computing over classical computing

Subatomic particles have a unique property of being able to exist in two different states simultaneously.

Quantum computing exploits this property to carry out operations much faster, and using much less energy, than classical computing systems.

In classical computing systems, a bit is a single piece of information that can exist in two states: ‘1’ or ‘0’.

However, in a quantum computing system, a single qubit can store more information than just ‘1’ or ‘0’, because it can exist in any superposition of these values.

“The difference between classical bits and qubits is that we can also prepare qubits in a quantum superposition of 0 and 1 and create nontrivial correlated states of a number of qubits, so-called ‘entangled states’,” said Alexey Fedorov, physicist at the Moscow Institute of Physics and Technology.
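Superposition is easy to simulate for a single qubit: the state is a pair of complex amplitudes, and a Hadamard gate rotates |0⟩ into an equal mix of 0 and 1. A minimal state-vector sketch (no quantum hardware or libraries required):

```python
from math import sqrt

# A qubit state is a pair of amplitudes (alpha, beta) for |0> and |1>.
ket0 = (1.0, 0.0)

def hadamard(state):
    """Apply the Hadamard gate: |0> -> (|0> + |1>) / sqrt(2)."""
    a, b = state
    return ((a + b) / sqrt(2), (a - b) / sqrt(2))

def probabilities(state):
    """Measurement probabilities are the squared magnitudes of amplitudes."""
    a, b = state
    return abs(a) ** 2, abs(b) ** 2

plus = hadamard(ket0)
print(probabilities(plus))            # ~ (0.5, 0.5): equal chance of 0 and 1
print(probabilities(hadamard(plus)))  # ~ (1.0, 0.0): back to certain |0>
```

The catch is scale: an n-qubit state needs 2^n amplitudes, so this style of simulation collapses long before the qubit counts at which quantum hardware becomes interesting.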

Future of quantum computing

The basic principle of quantum computation is that the quantum properties can be used to represent and structure data, and that quantum mechanisms can be devised and built to perform operations with this data.
While small-scale quantum computers have been developed, we are still years away from building a large-scale quantum computer.

Google, IBM and D-Wave Systems are currently working on different approaches to build this new generation of supercomputers.

If large-scale quantum computers can be built, they will be able to solve certain problems exponentially faster than any of the current classical computers.

Research in both theoretical and practical areas continues at a frantic pace, and many national government and military funding agencies support quantum computing research to develop quantum computers for both civilian and national security purposes.