How people work has changed significantly in the last five, ten, twenty years. There was a time when people would go out to a field to farm or to a physical factory for wages. With the invention of the microprocessor, and the rapid pace at which it has advanced application capability, the workplace and its talent have evolved in turn. Indeed, many industries and career paths have grown, transformed, and even disappeared.
The world population currently sits at 7.7 billion, is growing at a rate of 77 million a year, and is projected to reach 8.5 billion by 2030. As the population has grown, living in metropolitan areas has become increasingly expensive and difficult. The internet has allowed many organizations to drastically change how they operate, including operating entirely remotely; some choose not to have a physical location at all.
Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) have also allowed entrepreneurs and startups to launch and scale rapidly with minimal funding. Traditional IT infrastructure required significant capital investment in hardware, space, networking, engineers, and support.
Software as a Service (SaaS) has allowed organizations to deliberately outsource portions of their operations to a variety of third-party services. This could include, for example, housing entire Human Resources and Finance IT systems with such a provider. In a partially or completely remote organization, the concept of “on-premise” has a very fluid meaning.
There are some situations where a completely or partially remote organization needs to run in-house, on-premise enterprise applications. These could be internally built products, source control, CI/CD, artifact repositories, financial systems, and so on. Every endpoint, whether physical or virtual, owned, self-hosted, or third-party hosted, is a point of ingress and egress or, very simply, part of “the attack surface.”
The traditional solution for accessing these applications is some kind of virtual private network, or VPN. A VPN typically presents the user with a client application for connecting to a public IP address on the internet. Once connected, the VPN gateway performs network address translation, or NAT, into a scope of private subnets. A parallel can be drawn here: think of a VPN as the gate to a castle surrounded by a moat. The VPN gives you access inside the walls; explicit trust is therefore extended to everything inside the castle once you pass through that single gate.
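To make this trust model concrete, here is a deliberately simplified Python sketch, assuming a hypothetical 10.0.0.0/8 private scope behind the gateway: one credential check at the gate, then reachability to everything behind it.

import ipaddress

# Hypothetical private scope reachable behind the VPN gate.
PRIVATE_SCOPE = ipaddress.ip_network("10.0.0.0/8")

def vpn_connect(credentials_ok: bool):
    # One gate check. Pass it, and NAT places you inside the private scope;
    # from here on there are no per-service or per-request checks.
    if not credentials_ok:
        raise PermissionError("VPN authentication failed")
    return PRIVATE_SCOPE  # implicit trust of everything in this range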
While there are solutions for micro-segmenting such a network, they are limited by the very fact that they rely on network routes: there is no explicit control over the service being protected.
Another approach that has been used is network whitelisting. This approach is even less safe, in that it requires some other network to be trusted, which is rarely something you can rely on in a world of distributed workforces.
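A minimal sketch of what whitelisting amounts to, assuming a hypothetical trusted office egress range, makes the weakness plain:

import ipaddress

# Hypothetical "trusted" office egress range (a documentation range,
# used here purely for illustration).
TRUSTED_NET = ipaddress.ip_network("203.0.113.0/24")

def is_allowed(source_ip: str) -> bool:
    # Trusts the network location, not the person or the request. A remote
    # worker at home fails this check, while an attacker with a foothold
    # on the trusted network passes it.
    return ipaddress.ip_address(source_ip) in TRUSTED_NET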
Let’s explore three approaches to request-based authentication, commonly referred to as a component of zero-trust architecture. Two of these methods directly address protecting the endpoints remote workers use to reach on-premise systems. One foundational component to introduce first is identity: the idea that a set of credentials uniquely identifies a human. While the identity concept can be extended to services, let’s table that for now.
One of the best approaches for managing identity is a single sign-on (SSO) system. This allows a set of credentials to be managed from a single source of truth. It also assists with a number of other concerns including, but not limited to, a single point of provisioning, auditing, multi-factor authentication, device certificate management, and least-privilege access. Managing identity should rely heavily on multi-factor authentication; password rules alone, except perhaps for a very experienced user, are far too weak to serve as the sole identity control.
There are five factors of authentication, and the more you have in place, the more assured you can be that the person authenticating is who they say they are. The first is something you know, like a password or PIN. The second is something you have, usually a time-based token, hardware key, or smart card. The third is something you are: biometric validation such as fingerprints, retina scans, or facial recognition. The fourth is somewhere you are. This is harder to validate, but it can feed into a trust score for allowing a connection; if the combined trust score for a set of behavioral signals is low enough, the machine blocks access until a human operator can validate the activity. The fifth and last is something you do, such as gesturing a pattern over a picture that is unique to that person. A hypothetical trust-score calculation is sketched below.
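Here is one way such a trust score could be combined from the five factors; the weights and threshold are illustrative assumptions, not an industry standard.

# Hypothetical trust scoring across the five authentication factors.
FACTOR_WEIGHTS = {
    "password": 1.0,        # something you know
    "hardware_token": 2.0,  # something you have
    "biometric": 2.0,       # something you are
    "known_location": 1.0,  # somewhere you are
    "gesture": 1.0,         # something you do
}
BLOCK_THRESHOLD = 3.0  # below this, hold the connection for human review

def decide(verified_factors: set) -> str:
    score = sum(FACTOR_WEIGHTS.get(f, 0.0) for f in verified_factors)
    if score < BLOCK_THRESHOLD:
        return "block until a human operator validates the activity"
    return "allow"

# A password from a known location alone is not enough:
print(decide({"password", "known_location"}))               # -> block ...
print(decide({"password", "hardware_token", "biometric"}))  # -> allow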
With the identity component in place, we can move on to protecting services. One way to accomplish this is through an application reverse proxy or load balancer. Before the proxy authorizes a connection, it checks for the existence of a cookie. If the cookie is not present, it redirects the user to the single sign-on system for authentication. If authentication is successful, and the identity has been granted access to that service ahead of time, the user is issued a cookie. Every subsequent request is then checked and authorized. This means it is not a one-time check that opens up a network; each individual HTTPS request is checked.
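As a rough sketch of that per-request decision, assuming a hypothetical SSO at sso.example.com and a simple HMAC-signed cookie (a real deployment would use a hardened session or token library):

import hmac, hashlib

SIGNING_KEY = b"rotate-me"  # the proxy's cookie-signing key (illustrative)
AUTHORIZED = {"artifacts": {"alice@example.com", "bob@example.com"}}

def verify_cookie(cookie: str):
    # Cookie format assumed here: "<identity>.<hex hmac>".
    identity, _, mac = cookie.rpartition(".")
    expected = hmac.new(SIGNING_KEY, identity.encode(), hashlib.sha256).hexdigest()
    return identity if identity and hmac.compare_digest(mac, expected) else None

def authorize(service: str, cookie_header: str) -> str:
    identity = verify_cookie(cookie_header) if cookie_header else None
    if identity is None:
        # No valid cookie: send the user to the single sign-on to authenticate.
        return "302 -> https://sso.example.com/login"
    if identity not in AUTHORIZED.get(service, set()):
        return "403 Forbidden"  # authenticated, but not granted this service
    return "200 proxy to upstream"  # repeated for every single HTTPS request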
Web browsers natively support cookies, so getting other kinds of network traffic authenticated through a reverse proxy has to be handled another way. Using an identity-aware proxy, you can pre-establish a connection and then allow other requests to tunnel through it. Essentially, the proxy behaves like a web browser, attaching the secure cookie to every subsequent request.
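Sketching the client side, assuming a hypothetical internal host and a cookie already obtained through the SSO flow above, a session that replays that cookie on every request behaves just as a browser would:

import requests  # third-party: pip install requests

session = requests.Session()
# Assume the signed cookie was obtained out-of-band via the SSO login flow.
session.cookies.set("proxy_auth", "<signed-cookie-value>",
                    domain="internal.example.com")

# Any tool routed through this session now carries the cookie automatically,
# so the reverse proxy can authorize each request as if a browser made it.
resp = session.get("https://internal.example.com/artifacts/build.tar.gz")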
The third approach is for service-to-service traffic. There has been a sharp rise in the popularity of software-defined network appliances that augment traffic at Layer 3 (network) and Layer 4 (transport). These appliances insert additional encrypted information that only each network endpoint knows how to interpret. Interpreting every packet ensures that no traffic can arrive at an unintended service. Thus, once again, we are not relying on network routing, but on request-based authentication.
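The appliances described work on packets below the application layer, but the same principle can be sketched with mutual TLS, one common realization of request-based service-to-service authentication; the hostnames and file paths here are hypothetical.

import socket, ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations("internal-ca.pem")           # trust only the internal CA
ctx.load_cert_chain("service-a.pem", "service-a.key")  # present our own identity

with socket.create_connection(("service-b.internal", 8443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="service-b.internal") as tls:
        # The handshake fails unless each end presents a certificate the other
        # trusts, so traffic cannot silently arrive at an unintended service.
        tls.sendall(b"GET /health HTTP/1.1\r\nHost: service-b.internal\r\n\r\n")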
As the workforce grows more distributed in the years ahead, and networks become more fluid and mesh-based, request-based authentication will only become more important.
–Reprinted from US Cybersecurity Magazine.