
Identity and Access Management (IAM)

The ability of an organization to rapidly search for, identify and verify who is accessing its systems is a critical aspect of meeting its security and compliance requirements.

An Identity and Access Management (IAM) solution is often deployed to achieve these goals.

In its simplest form, IAM ensures the right people get access to the right resources at the right times for the right reasons.

Technology is only one of the components of IAM. Processes and supporting tools are also critical elements of an efficient IAM strategy.

In this blog I will concentrate on the technology aspect of IAM, focusing in particular on Single Sign-On. Future blogs will look at other IAM technologies.

Broadly, IAM comprises the following technology components:

  • Authentication: The traditional way to authenticate is with a username and password. There are products that provide methods stronger than passwords.
  • Authorization: Grants and enforces access (a minimal sketch of authentication versus authorization follows this list).
  • Enterprise Single Sign-On: Enables users to authenticate once and then be automatically authenticated to other target systems.
  • Federated Identity Management: Enables identity information to be shared across several trust domains.
  • User Provisioning: Includes creating, modifying and deleting user accounts and privileges.
  • Web Access Management: Offers all of the above for web applications.
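
To make the distinction between the first two components concrete, here is a minimal, hypothetical sketch of authentication versus authorization. The user store, roles and policy table below are invented purely for illustration and do not represent any particular IAM product.

```python
# Minimal, illustrative sketch of authentication vs. authorization.
# The user store, roles and policy below are hypothetical examples.
import hashlib

# Hypothetical identity store: username -> (salted password hash, roles)
USERS = {
    "mgbadebo": (hashlib.sha256(b"salt" + b"s3cret").hexdigest(), {"hr", "staff"}),
}

# Hypothetical authorization policy: which roles may access which application
POLICY = {
    "hr_app": {"hr"},
    "intranet": {"staff", "hr"},
}

def authenticate(username: str, password: str) -> bool:
    """Authentication: prove the user is who they claim to be."""
    record = USERS.get(username)
    if record is None:
        return False
    expected_hash, _roles = record
    return hashlib.sha256(b"salt" + password.encode()).hexdigest() == expected_hash

def authorize(username: str, application: str) -> bool:
    """Authorization: decide what an authenticated user may access."""
    _hash, roles = USERS.get(username, (None, set()))
    return bool(roles & POLICY.get(application, set()))

if authenticate("mgbadebo", "s3cret"):
    print("hr_app allowed:", authorize("mgbadebo", "hr_app"))    # True
    print("finance allowed:", authorize("mgbadebo", "finance"))  # False
```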

Enterprise Single Sign-On (ESSO)

Let us for a second imagine a home with at least 15 rooms (mine has far fewer), each of which is always kept locked. Including the main entrance, at least sixteen different keys would be required to gain access to all of the rooms. The more rooms one needs access to, the more keys one has to carry.

Life would be much easier for the homeowner, and for anyone who needs access to multiple rooms, if there were a master key that could open all the doors one has permission to enter.

Take this analogy and apply it to the IT network:

  • House = IT network
  • Rooms = Applications on the network
  • Person = Username
  • The Key(s) = Password

To gain access to any IT network, one generally requires a username and password. The system combines the username and password to represent the identity of the person requesting access to the network.

Gaining access to the network does not necessarily mean that one has access to all the applications on the network. For example, access to the HR applications will be restricted to HR personnel only, and this usually means another username and password.

The more applications you have, the more usernames and passwords there are to manage. Managing the distributed security issues associated with duplicate identity stores is a nightmare for both end users and IT administrators.

The concept of a master key on the IT network, known as Single Sign-On, is one way of addressing the issue of multiple usernames and passwords.

Single Sign-On (SSO), sometimes called Enterprise Single Sign-On (ESSO), enables users to access all their applications with a single password.

Originally, SSO was to be achieved by developing all applications and tools to use a common security infrastructure with a common format for authentication information.

Creating a common enterprise security infrastructure to replace a heterogeneous one is without question the best technical approach. However, the task of changing all existing applications to use a common security infrastructure is very difficult, and in addition there is a lack of consensus on what that common infrastructure should be.

The SSO solution as we have it today is implemented more like a proxy: the SSO application sits between the resource to be accessed and the user (identity) who needs to access it.

All applications that use the SSO as a proxy will have given the SSO application “authorisation” to check users’ credentials on their behalf. The SSO application also keeps a record of the permissions and access levels of every authenticated user.
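
As a rough illustration of that proxy pattern, the flow might look something like the sketch below. It is a conceptual sketch only; the function names, session table and permission records are made up and do not correspond to any real SSO product’s API.

```python
# Conceptual sketch of SSO acting as a proxy in front of applications.
# All names here (sso_login, SESSIONS, APP_PERMISSIONS) are hypothetical.
import secrets
from typing import Optional

SESSIONS = {}                          # token -> username, held by the SSO service
APP_PERMISSIONS = {                    # SSO's record of each user's access levels
    "mgbadebo": {"email", "intranet"},
}

def credentials_are_valid(username: str, password: str) -> bool:
    # Placeholder for a directory lookup (e.g. LDAP); assumed for this sketch.
    return username == "mgbadebo" and password == "s3cret"

def sso_login(username: str, password: str) -> Optional[str]:
    """The user authenticates once against the SSO service and receives a token."""
    if credentials_are_valid(username, password):
        token = secrets.token_urlsafe(16)
        SESSIONS[token] = username
        return token
    return None

def access_application(token: str, application: str) -> bool:
    """Each application asks the SSO service to vouch for the user."""
    username = SESSIONS.get(token)
    if username is None:
        return False                   # not authenticated
    return application in APP_PERMISSIONS.get(username, set())

token = sso_login("mgbadebo", "s3cret")
print(access_application(token, "email"))   # True
print(access_application(token, "hr_app"))  # False - no permission recorded
```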

Some Benefits of SSO

For end users

  • Only one password to remember and update, and one set of password rules.

For (IT) operations

  • A single common registry (directory) of user information.
  • A single common way to manage user information.

Security advantages

  • Easier to manage and protect a common registry.
  • Easier to verify user security information and update it when necessary, rather than tracking it down in every operational system. This is particularly valuable when users move to new roles with different access levels.
  • Common enterprise-wide password and security policies.
  • Users less likely to write down passwords since they only have to remember one.

The key to a successful implementation of SSO is planning. It is crucial that the organisation chooses the right solution: one that will scale and integrate seamlessly with the other IAM components.

With the ever-growing list of security and compliance rules and regulations, the adoption of IAM technology amongst organizations of all sizes will continue to grow.

The Firewall Journey

My eight-year-old daughter asked me what a firewall was the other day. I had to think carefully about my answer; I wanted to explain it in a way that would not leave her even more confused. I told her that a firewall is something that helps protect the computer from the bad stuff, and that the firewall is clever enough to distinguish the good stuff from the bad stuff and will only allow the good stuff in while keeping the bad stuff out.

I am not sure if I succeeded in my explanation in the end.

My answer got me thinking about firewalls and how relevant and effective they really are at keeping the bad stuff out.

A firewall, at its most basic level, controls traffic flow between a trusted network (such as a corporate LAN) and an untrusted network (the internet). The majority of firewalls deployed today are port based: they use source/destination IP addresses and TCP/UDP port information to determine whether or not a packet should be allowed to pass between networks.

For a port-based firewall to be effective, applications need to use the ports they are expected to use. For example, the firewall would expect an e-mail application to use port 25, FTP to use port 21 and web traffic to use port 80. There are “well-known” ports assigned to applications, and the static port-based firewall expects all applications to stick to this rule.

Port-based firewalls rely on the convention that a given port corresponds to a given service/application. In other words, they rely on the simple equation:

Ports + Protocol = Application

For Example:

 Port 25 + TCP = Email

They struggle to distinguish between different applications that use the same port.
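
To make that limitation concrete, a port-based decision essentially reduces to a lookup like the sketch below. The rule table is invented for illustration; the weakness is that anything sent over an allowed port is assumed to be the expected application.

```python
# Toy sketch of a port-based firewall decision: port + protocol is assumed
# to identify the application. The rule table below is purely illustrative.
ALLOW_RULES = {
    ("TCP", 25): "email",  # SMTP
    ("TCP", 21): "ftp",
    ("TCP", 80): "web",
}

def port_based_decision(protocol: str, dest_port: int) -> str:
    app = ALLOW_RULES.get((protocol, dest_port))
    if app is not None:
        return f"allow (assumed to be {app})"
    return "deny"

# The weakness: anything tunnelled over TCP/80 looks like "web" traffic.
print(port_based_decision("TCP", 80))    # allow (assumed to be web)
print(port_based_decision("TCP", 4444))  # deny
```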

In order for the firewall to continue to have relevance in protecting the network, it needs to be “more intelligent”; it needs to be able to do what traditional firewalls do today and much more.

Firewalls need to evolve to be more proactive in blocking new threats. Enterprises need to update their network firewall and intrusion prevention capabilities to protect business systems as attacks get more sophisticated.

In the research note “Defining the Next-Generation Firewall,” Gartner states that “Changing business processes, the technology that enterprises deploy, and threats are driving new requirements for network security”.  Gartner warns that “To meet these challenges, firewalls need to evolve into what Gartner has been calling ‘next-generation firewalls.'”

There are several attributes that the “Next-Generation Firewall” (NGF) needs to have. They include:

  • Ability to identify applications regardless of port or protocol
  • Ability to identify users and not just IP addresses
  • Ability to cope with heavy (multi-gigabit) traffic without any performance issues
  • Ability to use information from other sources outside the firewall to make blocking decisions

An NGF should be able to distinguish between Skype and Facebook; it should be able to tell who (and not just which IP address) is on YouTube, and it should be able to support heavy traffic. An NGF should also be able to use information from a directory service (e.g. Microsoft Active Directory) to tie blocking decisions to user identity.
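
Conceptually, tying a blocking decision to a user rather than an IP address might look like the sketch below. The IP-to-user mapping and the per-user policy are hypothetical stand-ins; this is not the API of Active Directory or of any particular NGF product.

```python
# Conceptual sketch: an NGF-style policy keyed on user identity and
# identified application, not on IP/port. All data below is invented.
IP_TO_USER = {            # stand-in for a directory/agent mapping IPs to users
    "10.0.0.15": "mgbadebo",
}
USER_POLICY = {           # per-user application policy
    "mgbadebo": {"youtube": "deny", "salesforce": "allow"},
}

def ngf_decision(source_ip: str, identified_app: str) -> str:
    user = IP_TO_USER.get(source_ip)
    if user is None:
        return "deny"     # unknown user: default deny
    return USER_POLICY.get(user, {}).get(identified_app, "deny")

print(ngf_decision("10.0.0.15", "youtube"))     # deny
print(ngf_decision("10.0.0.15", "salesforce"))  # allow
```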

The leading firewall vendors have recognised the challenges of traditional firewalls, and several products have been released.

Cloud computing, consumerization, compliance and the mobile workforce are all set to continue to grow, and this will only add to the security pressure on the network.

I have since had another “firewall” conversation with my daughter. This time I was explaining to her what a next generation firewall is and surprisingly, it made more sense to her this time. Now every time she cannot access a website, she blames the firewall!

The World is Flat – And the Data Center Network?

Thomas Friedman in his book “The World is Flat: A Brief History of the Twenty-First Century” analyzes how the world (in terms of commerce) became a level playing field as a result of globalization.

Is the data center network becoming flat?

Businesses’ reliance on IT to achieve more with less has never been greater. The flexibility and scalability of a fully virtualized or cloud data center will play a key role for the IT organization in its quest to keep up with the demands placed on it by the CXO.

Achieving a fully scaled-out, dynamic virtual data center (where applications and virtual servers can move seamlessly to other hosts) and a converged network (where all data center traffic, be it storage, messaging or voice, moves onto a single network) is not possible with the current multi-tiered network.

The data centre network is the critical enabler of all services delivered from the data centre.  Many data centre networks in operation today were designed and architected to support a multi-tier network.

These setups were designed for traffic patterns that predate virtualization. They are not optimal for today’s brave new world of server consolidation, virtual machines, cloud computing and 10 Gigabit switching.

The multi-tier network was created as a workaround for the limitations of Spanning Tree Protocol (STP).

The main goal of STP was to give us a loop-free network. To achieve this, STP makes sure that there is only a single active path to each network device. STP did manage to achieve its goal, but not without introducing limitations. Some of these limitations (listed below) contribute to the roadblocks that need to be addressed in order to achieve a fully scaled-out and dynamic data centre.

  • Wasted bandwidth – by blocking some network paths in order to avoid loops, not all of the available bandwidth is used (see the sketch after this list)
  • The active path is not always the most cost-effective – this impacts virtual machine and application portability
  • Failover time – when a device fails, STP reconfigures the network and sets up new pathways, but it does so relatively slowly. This is not acceptable in today’s networks
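
As a toy illustration of the wasted-bandwidth point above, the sketch below builds a simple spanning tree over a small, fully meshed set of switches and counts how many links end up blocked. The topology and the choice of root are invented purely for illustration; this is not an implementation of the STP algorithm itself.

```python
# Toy illustration: on a full mesh of switches, a spanning tree keeps only
# n-1 of the n*(n-1)/2 links active; the rest are blocked to prevent loops.
from itertools import combinations

switches = ["sw1", "sw2", "sw3", "sw4"]
all_links = list(combinations(switches, 2))        # full mesh: 6 links

# A simple tree rooted at sw1, standing in for what STP would leave active.
active_links = {("sw1", s) for s in switches[1:]}  # 3 links
blocked_links = [link for link in all_links if link not in active_links]

print(f"total links: {len(all_links)}, active: {len(active_links)}, "
      f"blocked: {len(blocked_links)}")            # 6 total, 3 active, 3 blocked
```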

The workaround for STP limitations has been to keep Layer 2 networks relatively small and join them together via Layer 3 segments – welcome to the 3-Tier Network.

Then came virtualization and the unified network. It soon became obvious that the 3-tier network is not ideally suited to supporting these new technologies.

For example, in order to do a non-disruptive VMotion, the source host and the target host, as well as their storage, need to be on the same Layer 2 network. In other words, live migration can only happen within a single subnet.
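
A quick way to see that constraint is with Python’s standard ipaddress module; the host addresses and the /24 subnet in the sketch below are made-up examples.

```python
# Sketch: live migration requires source and target hosts on the same subnet.
# The host addresses and the /24 subnet below are made-up examples.
import ipaddress

subnet = ipaddress.ip_network("192.168.10.0/24")
source_host = ipaddress.ip_address("192.168.10.21")
target_host = ipaddress.ip_address("192.168.20.42")  # different subnet

same_l2_segment = source_host in subnet and target_host in subnet
print("non-disruptive migration possible:", same_l2_segment)  # False
```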

All of this (and a host of other issues) leads to a requirement to make the data centre network more intelligent. The buzzword for this is FLATTENING THE NETWORK.

According to estimates by some analyst firms, if all businesses eliminated a single layer from their networks, they could collectively save $1 billion in IT spending.

So what is the way forward and how are the vendors responding?

The way forward is to come up with technology that can address the STP issues and, at the same time, flatten the network down to two tiers – and, if possible, one tier.

Transparent Interconnection of Lots of Links (TRILL) is a proposed standard from the IETF that is aimed at eliminating the aggregation/distribution layer and creating a switch fabric. TRILL’s goal is to make the network more intelligent and eliminate the shortcomings of STP.

Radia Perlman (the creator of STP) is a member of the IETF working group developing TRILL.

TRILL is an emerging standard, and some analysts believe that we are at least two years away from a mature, standards-compliant implementation of technologies such as TRILL. However, vendors such as Brocade, Cisco, Extreme, HP/3com and Juniper have all come out with approaches that flatten the network down to two tiers, and in some cases one tier.

Westcon have over 25 years of experience in the networking business, and our focus is to work with our customers and help them with the transition. The skills we have acquired over the years, and the fact that we carry the majority of these vendors, mean we are well placed to educate and help our customers negotiate the new world of a FLAT network.