
Securing Microservices-Based Applications

TL;DR: All the key aspects are captured in Table 1; start there, then go to the section of interest.

Confidentiality, Identity, and Authorization (CIA) form the core of securing any application. Each of these terms can be defined as follows:

Confidentiality: Confidentiality is primarily provided by encryption of data. Encryption needs to be performed for both data at rest and data in transit, as it hides the data being transferred from third parties. This further includes the aspect of data integrity: verifying that the data has not been forged or tampered with.

Identity: Identity is determined by Authenticating the user. This ensures that the parties exchanging information are who they claim to be.

Authorization: Most applications group users into different categories based on their roles. Access to an application can be restricted or denied based on the role each user has and whether that role includes permissions for a given function. The most common division of privileges is 'User' vs. 'Administrator'.

There are several industry-standard security protocols and frameworks used in securing an application. These frameworks can quickly become jumbled into an alphabet soup, leading to confusion among developers and architects. The goal of this article is to deconstruct and explain 'why, where, and how' each of the key security protocols and tools is used in a microservices-based web application.


Section 1: Introduction


Table 1 captures the five key areas essential for application security: Authentication, Authorization, Accounting, Process, and Encryption (AAAPE, a new acronym). Cybersecurity is a vast and constantly evolving topic; areas closer to hardware, such as Secure Boot [2] and hardware security [3], are out of the scope of this article. The focus here is on the security of web applications. In Section 2, I go over the authentication requirements for an application and cover the two broad areas of 'User' and 'API' authentication in detail. Section 3 covers the authorization aspects. Securing 'Data in Transit' and 'Data at Rest' via encryption is covered in Section 4. Accounting and logging details are covered in Section 5. In Section 6, I go over the steps required in the Secure Development Lifecycle (SDL) for an application. Finally, I conclude the article in Section 7.


Table 1: Elements of Secure Web Applications


Section 2: Authentication (Identity)


Security of any application starts with authentication. This can be further classified into two categories: User Authentication and API Authentication.


User Authentication: Most web applications require users to authenticate using a username and password. These credentials are then verified using one of two approaches: a) locally, within the application or using the machine hosting the application; or b) by sending the credentials to an external trusted server for verification. The server can then add another layer of authentication known as Multi-Factor Authentication (MFA), the most common form being a text message sent to the user's mobile phone with an additional security code.


  • Local Authentication: When developing any application that requires some form of authentication, the easiest approach is to hardcode a default username/password combination into the application. The 'default' user can then add more users to the application. While this approach is the easiest, it is also the least secure: the built-in username/password serves as a 'backdoor' into the system (even if it is not a common set of credentials such as admin/admin). Further, if user management is built into an enterprise application, the list of users can quickly become outdated as there is constant turnover of users. Stale accounts pose another security risk, as they can be used to access the system.

  • PAM: Pluggable Authentication Modules (PAM), developed in the mid-1990s by Sun Microsystems, provides dynamic authentication support for applications and services. PAM can be used to manage local users on the virtual machine hosting the application; the application can then in turn use the PAM API to authenticate those users. The PAM users themselves can be managed by central user and policy management outside the application, driven by Ansible or another IT scripting automation tool. When an application makes a call to the PAM API, PAM loads the appropriate authentication module as determined by its configuration file, pam.conf. The request is then forwarded to the underlying authentication module (for example, Linux password or Kerberos) to perform the specified operation. The PAM layer then returns the response from the authentication module to the application.
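
A minimal sketch using the third-party python-pam package; the 'login' service name is illustrative, and the actual credential check is delegated to whatever modules pam.conf configures.

import pam  # third-party 'python-pam' package

def authenticate_user(username: str, password: str) -> bool:
    p = pam.pam()
    # The PAM stack configured for the 'login' service decides which
    # module (e.g., pam_unix or Kerberos) actually checks the credentials.
    ok = p.authenticate(username, password, service="login")
    if not ok:
        print(f"PAM error: {p.reason} (code {p.code})")
    return ok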

Caveat: With most application deployments moving to containerized environments (e.g., Docker), PAM users configured in a container at runtime will not be retained when the container restarts. Configuring PAM users at container build time results in the same 'backdoor' as hard-coding users. One option is to move the microservice interfacing with PAM to a native VM environment.

  • Single Sign-On (SSO): Single Sign-On relies on a trusted third-party server to verify the identity of the user. The two most popular protocols supporting SSO are OpenID Connect (OIDC) and Security Assertion Markup Language (SAML). In this section I will go over SAML in more detail. SAML 2.0 is an OASIS standard used by web browsers to enable Single Sign-On through secure tokens (RFC 7522 defines a SAML profile for OAuth 2.0).


Security Assertion Markup Language: SAML is an XML-based framework that allows identity and security information to be shared across security domains. SAML is the primary mechanism for providing cross-domain single sign-on to users through a web browser.

SAML has three main roles defined in the communication protocol: End-User, Service Provider, and Identity Provider (IdP).

End-User: This is a browser-based client or a client that can leverage a browser instance for authentication.

Service Provider: This is the application or service that the client is trying to access.

Identity Provider (IdP) server: This is the entity that authenticates user credentials and issues the secure token.


Two other common terms used with SAML are SAML Assertion and Metadata.

SAML Assertion: This is a digitally signed XML document (message) that contains information about authentication, associated attributes, or authorization decisions about the authenticated user. This document is transferred from the IdP to the Service Provider following the client's login.

Metadata: This is an XML file generated by an SSO-enabled Service Provider as well as by an IdP. The exchange of SAML metadata is the first step in establishing a trust relationship between the IdP and the Service Provider.


Examples: Okta, OneLogin, and PingID are popular Identity Providers. These applications have an 'Admin' interface where developers can create SSO integrations.

Figure 1 shows the flow for the case when a user initiates SSO by requesting to be authenticated by the IdP.



Figure 1: SAML flow - user clicks SSO button (Source [8])


API Authentication: Almost all web applications have APIs that developers can use to build services and automation; the web application itself becomes a platform for these value-added services. API authentication can be classified into the following three types: a) HTTP Basic Authentication, b) API Key, and c) OAuth token-based authentication.

  • Basic HTTP Authentication: This authentication scheme, defined in RFC 7617, transmits credentials as user-ID/password pairs encoded using base64. The scheme is built into the HTTP protocol: the client sends HTTP requests with an Authorization header that contains the word "Basic", followed by a space and the base64-encoded "username:password". As the credentials are merely encoded, not encrypted, this method is not considered secure on its own and must be paired with encryption at the transport layer.
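
A minimal sketch using the 'requests' library; the URL and credentials are illustrative.

import base64
import requests

url = "https://api.example.com/v1/status"  # illustrative endpoint

# Option 1: let requests build the Basic header.
resp = requests.get(url, auth=("alice", "s3cret"))

# Option 2: build the Authorization header by hand, as RFC 7617 describes.
token = base64.b64encode(b"alice:s3cret").decode("ascii")
resp = requests.get(url, headers={"Authorization": f"Basic {token}"})
print(resp.status_code)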


  • API Key: API keys are tokens that clients provide when making an API call. API keys are very similar to Basic Auth; the difference is that an API key identifies a particular application. The API producer usually lets consumers of the API register their application, and a key is then issued to the consumer. The API key can be used to identify traffic, usage patterns, and logging. Because API keys can be easily copied and shared, they need to be paired with encryption at the transport layer, just like Basic Auth. The key can be sent in the query string or as a request header.
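
A minimal sketch; the 'X-API-Key' header name and endpoint are illustrative conventions, not a standard (producers vary between custom headers and query parameters).

import requests

API_KEY = "0123456789abcdef"  # issued by the API producer at registration

# As a request header:
resp = requests.get("https://api.example.com/v1/reports",
                    headers={"X-API-Key": API_KEY})

# Or in the query string:
resp = requests.get("https://api.example.com/v1/reports",
                    params={"api_key": API_KEY})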


  • OAuth 2.0: The OAuth 2.0 authorization framework, described in RFC 6749, enables a third-party application to obtain limited access to an HTTP service. To understand the OAuth protocol, it is important to understand the following terms in relation to the OAuth framework: Roles, Grant Type, Access Token, and Scopes.

Roles: The OAuth framework defines four roles.

  1. Resource Owner: The end-user or entity that can grant access to a protected resource.

  2. Client Application: The application that wants access to the resources; access is usually initiated by the Resource Owner.

  3. Resource Server: The server hosting the protected resources; it requires a token from clients before providing access.

  4. Authorization Server: The server that issues access tokens to the Client Application for accessing resources on the Resource Server.


Grant Type: Grant types are the methods a client uses to obtain an access token. The OAuth RFC defines grant types optimized for the following use cases: a web app, a native app, a device without the ability to launch a web browser, and server-to-server applications.

The RFC defines four grant types, plus a mechanism for defining custom grant types for use cases not covered by the predefined ones. The defined grant types are:

  1. Authorization code.

  2. Implicit.

  3. Resource owner password credentials, and

  4. Client credentials.

The "Client credentials" grant type is used for secure machine-to-machine communication and is one of the simpler OAuth flows. This is the grant type typically used for inter-microservice exchange of protected resources: the application runs an instance of an Authorization Server microservice that is used by the other microservices. Figure 2 shows the flow for the client credentials grant type; a token-request sketch follows the figure. In this simpler flow, the client application and the resource owner are the same.



Figure 2: Client credential grant type flow (Source [16])
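
A minimal sketch of the client credentials token request using the 'requests' library; the token endpoint, client ID, and secret are illustrative.

import requests

resp = requests.post(
    "https://auth.example.com/oauth/token",
    data={"grant_type": "client_credentials", "scope": "read"},
    auth=("my-service-id", "my-service-secret"),  # client id/secret
)
resp.raise_for_status()
access_token = resp.json()["access_token"]

# The token is then presented to the resource server as a Bearer token.
api = requests.get("https://api.example.com/v1/orders",
                   headers={"Authorization": f"Bearer {access_token}"})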


Access Token: The format for OAuth 2.0 Bearer tokens is described in RFC 6750. The RFC does not define a structure for the token beyond it being a random string, with a few types of special characters allowed. However, many implementations choose to use a 'self-encoded token' such as the JSON Web Token (JWT) defined in RFC 7519. The main benefit of a self-encoded token is that the server can verify the access token without making additional lookups. A JWT is made up of three components separated by periods: the first part is a header that describes the signature method used, the second part contains the token data (claims), and the third part is the signature.
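
A minimal sketch using the third-party PyJWT package; the shared secret, claims, and HMAC-SHA256 algorithm are illustrative choices.

import jwt  # third-party 'PyJWT' package

SECRET = "change-me"  # illustrative shared secret

# Issue: encode the claims and sign them.
token = jwt.encode({"sub": "alice", "scope": "read"}, SECRET, algorithm="HS256")

# Verify: decoding checks the signature, so no database lookup is needed.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["sub"], claims["scope"])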


Scopes: Another important concept defined by the OAuth framework is 'scopes'. The 'scope' field is embedded in the API request and specifies the type of access the client needs. Scopes are access-control information returned along with the OAuth token in the response from the Authorization Server. An example response token where the scope is limited to read-only access is as follows:


{
  "access_token": "2YotnFZFEjr1zCsicMWpAA",
  "token_type": "test",
  "expires_in": 60,
  "scope": "read"
}
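
A minimal sketch of how a resource server might enforce the granted scope; the dict mirrors the response above, and the helper function is hypothetical.

token_response = {"access_token": "2YotnFZFEjr1zCsicMWpAA",
                  "token_type": "test", "expires_in": 60, "scope": "read"}

granted = set(token_response["scope"].split())  # scopes are space-delimited

def require_scope(needed: str) -> None:
    # Deny the request unless the needed scope was granted with the token.
    if needed not in granted:
        raise PermissionError(f"missing required scope: {needed}")

require_scope("read")     # allowed
# require_scope("write")  # would raise PermissionError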

The OAuth framework and its related protocols define an industry-standard authorization framework. A number of specifications and extensions are being developed by the IETF OAuth Working Group. For readers wanting to learn more, https://oauth.net/2/ provides many details and additional references.


Hybrid User and API Authentication: OAuth 2.0 provides a comprehensive standard for securing access to APIs. However, it lacks user authentication. OpenID Connect (OIDC) bridges this gap by building an identity layer on top of the OAuth 2.0 framework.

  • OpenID Connect (OIDC): OIDC builds on two other protocols: OpenID and OAuth 2.0. OpenID (not to be confused with OIDC) provides only authentication, and pure OAuth 2.0 provides authorization. OIDC is built on top of OAuth 2.0; however, it uses slightly different terminology to describe the 'Roles' of the OAuth 2.0 entities.


Table 2: Mapping of Terms: OAuth 2.0 and OIDC


OpenID Connect is an increasingly common authentication protocol: when an app prompts you to authenticate using Facebook or Google credentials, the app is probably using OIDC. The protocol allows a range of clients, including web-based, mobile, and JavaScript clients, to request and receive information about authenticated sessions and end-users.


Figure 3 shows the flow for web applications (Relying Parties) to authenticate users with the OpenID Connect Provider (OP) server. This server usually gets user information from an identity provider (IdP).


Figure 3: OpenID Connect (OIDC) flow for user login (Source [21])


Section 3: Authorization


Authentication determines the identity of the user and verifies who they are; authorization determines what a user can and cannot access. Access is determined by the policies and rules placed on the application based on its requirements. Authorization checks are typically performed after the user has been successfully authenticated. Role-Based Access Control (RBAC) provides a robust framework for managing authorization.


  • Role-Based Access Control (RBAC): RBAC provides an authorization framework using the following three concepts (a code sketch follows the list):

    1. Permissions: These are meaningful operations in a given application, such as read-only, read-write, etc.

    2. Roles: A given 'Role' is a collection of 'Permissions' that can be applied to users.

    3. Users: An authenticated user can have one or more 'Roles' assigned for a given application. If no roles are assigned, the application must deny the user access.
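
A minimal RBAC sketch; the role and permission names are illustrative.

ROLE_PERMISSIONS = {
    "user":  {"read"},
    "admin": {"read", "write"},
}

def is_authorized(user_roles, needed):
    # Union the permissions of every role held by the user.
    granted = set()
    for role in user_roles:
        granted |= ROLE_PERMISSIONS.get(role, set())
    return needed in granted

print(is_authorized(["user"], "read"))    # True
print(is_authorized(["user"], "write"))   # False
print(is_authorized([], "read"))          # False: no roles, access denied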


  • SAML and LDAP/AD groups:

A SAML assertion can include the groups a user is associated with; the application then uses this information to apply RBAC rules. LDAP groups can be fetched by the application and synchronized periodically, so that when the user logs in, these groups can be matched to the 'Roles' defined in the application to determine the 'Permissions' the user is assigned.


  • OIDC and OAuth 2.0 Scopes:

As discussed in the previous sections, the OIDC and OAuth 2.0 protocols provide access-control information using 'scopes'. These scopes can be seen as 'Permissions' for the user and can be matched against the user's roles to check authorization.



Section 4: Encryption (Confidentiality)


The best strategy to protect data from exposure is to encrypt it. Encryption needs to be applied to both: a) Data in Transit, and b) Data at Rest.


  • Data in Transit: Transport Layer Security (TLS), defined in RFC 5246 and RFC 8446, provides privacy and data security for communications over the Internet, i.e., for data in transit. (IPsec, RFC 6071, is another protocol that encrypts data in transit, at the network layer; it is out of scope for the current discussion.) TLS provides:

    • Authentication: Verifies the identity of the communicating parties, with the help of asymmetric cryptography.

    • Confidentiality: Protects the data exchange from unauthorized access by encrypting it with symmetric encryption algorithms.

    • Integrity: Detects alteration of data during transmission by checking the message authentication code.


SSL, TLS History: Secure Socket Layer (SSL) was initially developed by Netscape in 1995; TLS 1.0 arrived as an upgrade to SSL 3.0 in 1999. TLS v1.2 came out in 2008 and TLS v1.3 in 2018; both remain active and account for the majority of HTTPS usage, while all other versions of SSL and TLS have been deprecated. SSL-related terms are still in common use and cause confusion; in practice these terms and configuration options are parameters for TLS v1.2 or v1.3.

The TLS protocol is composed of two layers: the TLS Handshake Protocol and the TLS Record Protocol. The purpose of the Handshake Protocol is to authenticate the client and the server and to establish a shared secret key for the next phase. The Record Protocol then provides confidentiality and integrity by encrypting and verifying the data using the shared key established during the handshake. A connection sketch follows.
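
A minimal sketch of a TLS client using Python's standard 'ssl' module; the host name and CA file path are illustrative.

import socket
import ssl

ctx = ssl.create_default_context(cafile="ca-cert.pem")  # trust our CA

with socket.create_connection(("app.example.com", 443)) as sock:
    # wrap_socket performs the TLS handshake (authentication and key
    # establishment); all subsequent reads/writes go through the Record
    # Protocol, encrypted and integrity-protected.
    with ctx.wrap_socket(sock, server_hostname="app.example.com") as tls:
        print(tls.version())                 # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])  # the server's verified identity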


Deployment: Enabling TLS for any application requires generating digital certificates. These certificates are used in the authentication phase and provide a secure mechanism for key exchange. A certificate contains data that binds public key values to a system, in the X.509 format standardized by the ITU-T. The Public-Key Infrastructure provides confidence that the private and public keys are owned by the correct system; the Public-Key Infrastructure using X.509 (PKIX) standard is defined in detail in RFC 5280. Deploying self-signed certificates involves the following steps, all of which can be performed with the 'openssl' executable (a programmatic sketch follows the list):

  1. Generate Certificate Authority (CA) Self-Signed Private Key and Certificate.

  2. Generate Server’s private key and certificate signing request (CSR).

  3. Sign the server’s CSR to get the server’s signed certificate.
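
The same three steps can also be performed programmatically. Below is a minimal sketch using the third-party Python 'cryptography' package; the names, key sizes, and validity periods are illustrative assumptions.

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

now = datetime.datetime.utcnow()

# Step 1: CA private key and self-signed CA certificate.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)  # self-signed: issuer == subject
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# Step 2: server private key and certificate signing request (CSR).
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "app.example.com")]))
    .sign(server_key, hashes.SHA256())
)

# Step 3: the CA signs the CSR, producing the server's certificate.
server_cert = (
    x509.CertificateBuilder()
    .subject_name(csr.subject)
    .issuer_name(ca_cert.subject)
    .public_key(csr.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=90))
    .sign(ca_key, hashes.SHA256())
)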


  • Data at Rest: Almost all applications store sensitive information in a database, ranging from user credentials and personal information to credentials for other systems. When stored without any encryption, this data is a significant source of leaks. Encrypting sensitive data at rest provides an additional safeguard even if the security of the system is compromised. Most applications generate a key in software and use it from within the application to encrypt the data. Though this offers a level of protection, it does not offer protection if the source code is compromised. A Hardware Security Module (HSM) is dedicated hardware designed to protect and manage the keys used for encrypting and decrypting data.

    • HSM device: HSMs are built on specialized hardware with a security-focused OS. HSMs are usually certified to the highest security standards, such as Federal Information Processing Standard (FIPS) Publication 140-2, the US government computer security standard used to validate cryptographic modules.

    • HSM function: Generate, access, and protect cryptographic keys, separate from the sensitive data.

A separate microservice can be created as part of the application to interface with the HSM. This service can further implement features to handle backup/restore of data with HSM-generated keys, and rotation of keys. In the data-access microservice, an interceptor can be added that calls the encryption logic before writes to the DB and the decryption logic after reads, before data is returned to the client. In addition to being stored in encrypted form, sensitive data should be shown as a masked value in the UI. A minimal sketch of such an interceptor follows.
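
A minimal sketch of an encrypt-on-write / decrypt-on-read interceptor using the 'cryptography' package's Fernet recipe; in a real deployment the key would come from the HSM-backed key service rather than being generated in process, and the dict stands in for the database.

from cryptography.fernet import Fernet

key = Fernet.generate_key()  # stand-in for an HSM-managed key
fernet = Fernet(key)

def write_field(db: dict, name: str, value: str) -> None:
    # Encrypt before the value ever reaches the datastore.
    db[name] = fernet.encrypt(value.encode("utf-8"))

def read_field(db: dict, name: str) -> str:
    # Decrypt after the read, before returning to the caller.
    return fernet.decrypt(db[name]).decode("utf-8")

db = {}
write_field(db, "api_password", "s3cret")
print(db["api_password"])              # ciphertext at rest
print(read_field(db, "api_password"))  # 's3cret'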



Section 5: Logging and Monitoring (Accounting / Audit log)


Logging key events and information and collecting the logs are critical components of any application, yet logging is often implemented as an afterthought. Logging helps with troubleshooting, alerting, usage statistics, compliance, and forensics. Each application needs to decide what information, events, and access points to log, and where to send those logs.


A Syslog server provides a standard logging mechanism for *nix systems. It also provides a central location to which all the microservices can easily send their log messages. Syslog as a logging mechanism has been around for decades; it was standardized in RFC 5424 in 2009. Splunk, Kibana, and other tools can process these logs to generate dashboards and alerts based on triggers set by the application's administrators. The transmission of the syslog messages themselves also needs to be secured: RFC 5425 describes the use of TLS to provide a secure connection for the transport of syslog messages. A minimal sketch follows.
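
A minimal sketch using Python's standard-library SysLogHandler; the collector address and service name are illustrative. Note that SysLogHandler does not itself provide TLS; RFC 5425 transport would require a TLS-capable relay or library.

import logging
from logging.handlers import SysLogHandler

# Ship log records to the central syslog collector (UDP by default).
handler = SysLogHandler(address=("logs.example.com", 514))
logger = logging.getLogger("payments-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("user=alice src=203.0.113.7:51514 event=login status=success")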


Logging the following events can help with Security Event Management (SEM) and Security Information and Event Management (SIEM):

  1. System events - System startup/Shutdown, Network changes, Security and other setting changes.

  2. System Audit and Account information: Logon attempts, account changes, privileged access etc.

  3. System Operations: Failures, transactions with external systems and their failures, configuration changes, and usage statistics for the application's features - time taken, number of operations, classification of operations.

At a minimum, the following attributes need to be included in each log message:

  • The identity of the account.

  • Date and time zone (or UTC).

  • Service name generating the log.

  • Source IP address, port and Protocol.

  • Session duration, Identification and session cookies.

These attributes should then be followed by the specifics of the system event, audit entry, or operation performed.


Section 6: Secure Development LifeCycle (SDL)


Secure Development Lifecycle standardizes and integrates security best practices throughout the software development lifecycle. The guiding mantra is 'secure by default'. The goal is to build software using policies, processes, technologies, and products that are visible and controlled.

The goals of the SDL process are to:

  • Reduce vulnerability and security risk; and,

  • Remediate issues faster.






Figure 4: Secure Development Life-Cycle (Source [37])


The SDL process includes the following steps:

  • Security Requirements - These requirements range from following the latest industry standards and best practices, to meeting legal and industry compliance requirements such as FIPS 140-3, to internal coding and review standards.

  • 3rd Party Software - Identify and register 3rd-party software in a centralized database. The response to any security issue in these components needs to follow an established maintenance and upgrade plan.

  • Secure Design - Develop a threat model by drawing out the system architecture and adding trust boundaries and other details, and use the STRIDE framework to validate the model. As with any design, this needs to be iterated on as development progresses and kept updated so that the code and the design stay in sync.

  • Secure Coding - Secure coding practices include input validation, checks requiring authentication and authorization for protected resources, adding logs to capture key events, ensuring that logs do not leak credentials, and code reviews to meet internal standards. The goal is to maintain the runtime integrity of the system and prevent it from being compromised under cyberattack.

  • Static Analysis - Static analysis tools such as SonarQube, Coverity, etc. can help detect potential software and security bugs in the code. Incorporating static analysis into the development process, and allocating time to address the issues the tools find, helps get problems fixed before the code is released.

  • Penetration Testing - Running scans against the software with standard tools such as the Qualys scanner, Codenomicon, etc. can help identify additional security problems. These tools often work by injecting malformed data, scanning for open ports, sniffing for unencrypted traffic and data, and probing for exposures during boot-up and in logging.


Section 7: Conclusion


Security and privacy are core aspects of all modern applications, and the tools and standards used for them need to be continually updated to reflect the constantly changing security landscape. In this in-depth review, I went over the five key areas that need to be addressed to secure any software application: Authentication, Authorization, Accounting, Process, and Encryption (AAAPE). Applying these to an application from the start removes the expense and pain of changing the security architecture later. It also provides a Day One advantage over less security-savvy competition and improves overall customer satisfaction. Ultimately, having a secure and trusted application makes good business sense.


References:

  1. Microservices Security How To Secure Your Microservice Infrastructure? https://www.edureka.co/blog/microservices-security

  2. Secure Boot: https://docs.microsoft.com/en-us/windows-hardware/design/device-experiences/oem-secure-boot

  3. Hacking and Protecting IC Hardware https://ieeexplore.ieee.org/document/6800313

  4. OWASP top 10; https://owasp.org/www-project-top-ten/

  5. Pluggable Authentication Module - (PAM): OSF RFC, "Unified Login with Pluggable Authentication Modules (PAM)", October 1995

  6. Single Sign-On (SSO) - https://www.onelogin.com/learn/how-single-sign-on-works

  7. Security Assertion Markup Language (SAML) 2.0 - RFC 7522; https://tools.ietf.org/html/rfc7522

  8. Intro to SAML, Security Assertion Markup Language, SSO; https://www.secureauth.com/blog/introduction-to-saml

  9. API Authentication; https://nordicapis.com/3-common-methods-api-authentication-explained/

  10. API keys: https://cloud.google.com/endpoints/docs/openapi/when-why-api-key

  11. RFC 6749: The OAuth 2.0 Authorization Framework - https://tools.ietf.org/html/rfc6749

  12. OAuth documents - https://www.oauth.com/

  13. OAuth 2.0 Client Credentials - https://developer.okta.com/blog/2018/04/02/client-creds-with-spring-boot

  14. OAuth Grant types - https://developer.okta.com/blog/2018/04/10/oauth-authorization-code-grant-type

  15. Client Credentials flow - https://auth0.com/docs/flows/client-credentials-flow

  16. Client Credentials flow - https://docs.pivotal.io/p-identity/1-11/grant-types.html

  17. JSON Web Token - https://jwt.io/introduction/

  18. OAuth Scopes - https://developer.github.com/apps/building-oauth-apps/understanding-scopes-for-oauth-apps/

  19. OpenID Connect - https://fhirblog.com/2014/06/19/fhir-and-openid-connect/

  20. OpenID Connect 2 - https://medium.com/@nilasini/real-world-example-to-understand-oidc-implicit-flow-ecdf1b1d0156

  21. OpenID Connect - https://infosec.mozilla.org/guidelines/iam/openid_connect.html

  22. Role-Based Access Control - https://auth0.com/docs/authorization/rbac

  23. AGDLP - https://en.wikipedia.org/wiki/AGDLP

  24. RBAC1 - https://en.wikipedia.org/wiki/Role-based_access_control

  25. RBAC 2 - https://csrc.nist.gov/projects/role-based-access-control/faqs

  26. RBAC 3 - https://tools.cisco.com/security/center/resources/framework_segmentation

  27. TLS vs SSL - https://www.hostingadvice.com/how-to/tls-vs-ssl/

  28. TLS , SSL - https://dev.to/techschoolguru/a-complete-overview-of-ssl-tls-and-its-cryptographic-system-36pd

  29. TLS certificate creation: https://dev.to/techschoolguru/how-to-create-sign-ssl-tls-certificates-2aai

  30. HSM - https://hubsecurity.io/what-is-a-hardware-security-module-hsm/

  31. HSM - https://en.wikipedia.org/wiki/Hardware_security_module

  32. HSM - https://www.yubico.com/products/hardware-security-module/

  33. Logging - https://cheatsheetseries.owasp.org/cheatsheets/Logging_Cheat_Sheet.html

  34. Logging - Syslog Protocol - RFC 5424 and TLS for Syslog RFC 5425

  35. Logging - https://en.wikipedia.org/wiki/Security_information_and_event_management

  36. Logging - https://security.berkeley.edu/security-audit-logging-guideline

  37. Secure SDL - https://www.cisco.com/c/dam/en_us/about/doing_business/trust-center/docs/cisco-secure-development-lifecycle.pdf

  38. Secure SDL - https://www.microsoft.com/en-us/securityengineering/sdl

  39. Secure SDL - https://dzone.com/articles/how-to-approach-security-development-lifecycle-sdl

  40. Secure SDL - http://www.cs.wm.edu/~ksun/csci680-f15/notes/25-SDL.pdf


