A Beginner’s Guide to Network Security Management

Introduction

Information is now a highly prized asset. A growing plethora of security threats and attacks on networks aim mainly to pilfer data, especially financial information and passwords, so the need to protect data both in storage and in transmission is paramount. Network security and management in ICT is the process of maintaining the integrity of a system, its data, services, and infrastructure. It involves the assembly, deployment, integration, coordination, and securing of network devices and their components. The basic goal of any computer network is to ensure quality of service through real-time service availability at a reasonable cost. Security management ensures that the security policies put in place to safeguard the network from attacks are operational and up to date, and it enforces compliance with those policies in order to counteract potential threats, ensure reliability, and boost confidence in the system.

Security and management are complementary in practice; each needs the other to function effectively. A management program needs policy, implementation, and security assurance measures to be effective. Without an enforced security policy in place, service users would begin to use the network in any way they deem fit, creating chaos in the system. An organization’s security policies determine users’ responsible use of the system. The goal of network security, therefore, is to apply and enforce consistent security policies across the organization’s physical boundaries and throughout the network.

Five Pillars of Network Management

  1. Performance Management: This is meant to optimize the quality of network service, measure the amount of resources consumed, and seamlessly coordinate the operations of different network components.
  2. Failure Management: The main objective is to detect and respond to faults in the network and to log reports of such events. It also manages bugs during operations, ensures continuous service during attacks, and initiates recovery after network failure.
  3. Configuration Management: This is used to logically link the various devices on the network together. Without configuration, it is impossible to connect a device to the network. It also enables the network admin to keep track of the devices on the network.
  4. Accounting Management: This covers everything related to billing for network services, such as usage charges and statistics on resource inflow and outflow.
  5. Security Management: This is meant to set, control, and enforce access to various network resources based on the organization’s security policy.

Security Policy

Security policies are the rules and regulations that an organization puts in place to secure and protect its organizational resources. These policies should be adequately researched, documented, implemented, evaluated, and reviewed periodically to ensure a properly managed and secured network. An organization must first break down its security requirements into sections before formulating security policies. Some of the sections include the following:

  1. Program Policies:

    These cover the organization’s overall network objectives and goals. They are applied to all ICT resources within the organization. For example, program policies may address service availability or downtime contingency measures. Program policies must be in tune with existing organizational rules.

  2. System Policies:

    These policies apply only to the organization’s systems or ICT infrastructure. Multiple system policies may be formulated, each covering a specific system, to control access to that system and regulate its usage.

  3. Issue-Specific Policies:

    These address particular issues, such as Internet access, sending and receiving e-mail attachments, out-of-bounds areas of the network, etc.

The following guidelines may be considered when formulating security policies:

– Purpose statement:

This states the organization’s mission, explains the main goals of the policy, identifies the terms and conditions set for specific services, and so on. It should also provide justification for the policy. The overall purpose statement should strike an acceptable balance between security and productivity.

– Scope of the policy:

This states which network resources are covered by the policy, such as hardware, software, data, personnel, etc. It also identifies general areas of risk associated with the network service.

– Application procedure:

This defines the situations in which a specific policy is applicable: how each risk is to be addressed and the specific actions required. Simply put, it answers the what, where, when, and whom of the policy.

– Compliance:

Describes how the policies will be deployed and enforced, provides the basis for verifying compliance through audits, outlines implementation and enforcement plans, and stipulates the disciplinary penalties for breaching any rule.

– Miscellaneous:

Provides other relevant information, stipulates procedures and point of contact for reporting incidents, and defines who is responsible for reviewing the procedures.

Implementing Security Policies

Before implementing network security policies, the organization’s management should orient employees by developing descriptive security documentation, conducting threat emergency drills, announcing the new security policies, and updating employees on their progress. Once an organization has finalized its network security policy, the policy and its procedures should be documented, and measures should be put in place to protect sensitive information from unauthorized disclosure or modification.

Organizations should review their security policies periodically to ensure they are up to date and continue to fulfill the institution’s security needs. Each department should also review its own policies and the stipulated procedures.

Risk Assessment

Risk assessment is the process of evaluating the threats that may affect the network and their possible sources, and determining how to counter or mitigate their effects. The main objective is to reduce the possibility of a threat being unleashed on critical network assets. To perform a risk assessment, network management needs to compile the possible threats and vulnerabilities.

Basic Terms

a. Threats:

A threat is anything that has the potential to disrupt the smooth operation, functioning, integrity, or availability of a system. There are different types of threats. They include natural threats such as floods and storms, unintentional threats caused by accidents or incompetence, and intentional threats that are deliberately planned to attack the system.

b. Vulnerabilities:

A vulnerability is a weak link in a system that can be exploited to gain access to a network for malicious reasons. It could be a weakness in the design of a system, a misconfiguration, or a mis-implementation of a security solution that makes the system susceptible to a threat.

c. Risk:

A risk is a threat that can be launched through a vulnerability.

Risk assessment should encompass the following details:

  • Identification and prioritization of assets according to importance or value.
  • Identification of vulnerabilities or weak links in the network that can be exploited.
  • Identification of threats and the probability that they will be used to attack the network.
  • Evaluation of the consequences of an attack, the losses that may be incurred, and the cost of neutralizing the threats.
  • Identification of countermeasures, deterrence, and delay tactics that can be deployed to neutralize potential threats to minimize the risk posed by the vulnerability.
  • Developing security policies and procedures.
  • Evaluation of the effectiveness of the countermeasure.
  • Measuring and estimating the time it will take to fully implement the proposed security countermeasures.
  • Procedure for updating the countermeasures to ensure checks put in place will not be defeated by the advancement of present threats or the emergence of new threats.
  • Determination of whether the countermeasures are to be deployed through the whole organization or specific departments only.

Common Network Attacks

a. Eavesdropping:

The interception of data by an unauthorized entity is called eavesdropping. In passive eavesdropping, the intruder only secretly listens to the messages being passed; in active eavesdropping, the intruder listens and also modifies the sent messages or injects malicious messages of their own into the network, distorting the communication.

b. Virus:

Viruses are programs that are surreptitiously attached to files and dispersed into the network. They replicate and activate themselves when an infected file is opened on a system.

c. Worm:

Worms are similar to viruses in that both are self-replicating, but a worm does not need to be attached to a file before it can be deployed. Worms can be dispersed through mass mailing or targeted at a specific network.

d. Trojan:

Trojans are usually attached to programs that appear benign to the user but deploy once opened on a system. Trojans usually carry a payload of viruses or worms.

e. Unauthorized Access:

This is when the attacker mimics a legitimate network user in order to gain access to the network. For example, a web server may be tricked into granting access to a hacker. The main aim of this attack is usually to consume resources uselessly or to interfere with the system’s intended function.

f. Phishing:

Phishing is an attempt to trick users into parting with sensitive information, such as credit card numbers or personal data. This is usually accomplished through cloned e-mails sent to a specific target, requesting that the user provide the information or be denied service.

g. Spoofing:

Spoofing means cloning the address of a targeted user and inserting fake headers so that the message appears genuine to intrusion prevention systems, in order to gain access to other computers. The identity of the intruder is hidden, making detection and prevention difficult. Spoofing can be carried out through cloned e-mails, IP addresses, or caller IDs.

h. Breach of Confidentiality:

This is when an attacker sneaks into a network in order to access sensitive information and divulge it to competitors or the public. A compromised user account or employee is enough to launch this attack.

i. Denial of Service:

DoS is carried out by having a clandestine program bombard a particular address with service requests beyond the server’s capacity, thereby overwhelming the server. If a server is designed to handle 100 requests per second, the attacker might send 300 requests per second. Eventually, the system cannot process any more requests, rendering it out of service.
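To make the arithmetic concrete, below is a minimal Python sketch of one common server-side mitigation: a token-bucket rate limiter that refuses requests beyond the rate the server was sized for. The class name, rate, and burst values are illustrative assumptions, not part of any particular product.

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: refuse requests that exceed a
    sustained rate, blunting the request floods described above."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec      # tokens added per second
        self.capacity = burst         # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # over the limit: drop or defer the request

# Usage: a server sized for 100 requests/second simply rejects the excess.
limiter = TokenBucket(rate_per_sec=100, burst=100)
if not limiter.allow():
    pass  # e.g. respond with "429 Too Many Requests" instead of doing work
```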

The Security Circle

There are three pivotal aspects of security: incident prevention, detection, and response. The security circle should be the foundation of all security policies and measures that an organization develops and deploys.


[Figure: The Security Circle — prevention, detection, and response]

a. Prevention

At the top of the security circle is prevention. To provide some level of security, it is imperative to take measures to prevent the exploitation of vulnerabilities. Though it is impossible to devise a security scheme that will prevent all vulnerabilities from being exploited, companies should ensure that their preventive measures are strong enough to discourage potential attackers. Organizations should emphasize preventive measures over detection and response. It is easier, more efficient, and much more cost-effective to prevent an attack than to detect, respond, and clean up the mess that may be created.

b. Detection

When preventive measures are implemented, threat detection procedures need to be put in place to detect potential risks or security breaches, in case preventive measures fail. The sooner a problem is detected, the easier it is to remedy and clean up.

c. Response

An organization needs to develop a plan that will deploy the appropriate response to a security breach. The response methods should be able to identify the source of the threats. The plan should be documented and all the employees should be aware of what to do when faced with security challenges. Periodic drills should be conducted to ensure employees comply with laid down procedures.

Network Security Architecture

The network security architecture must contain the following five elements:

a. Authentication:

This ensures that a communicating entity really is the one it claims to be. Authentication is required when communicating over a network or signing in to a networked system. Three basic schemes are often used for authentication:

– Verification:

This authenticates identity through the use of a password, code, or sequence. Security is predicated on the idea that only the authorized user knows the secret password or code, so whoever supplies it is granted access to the network. However, this method is not very secure, as the required confidential information can be divulged or stolen.
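As a rough illustration of how verification can avoid storing the secret itself, here is a minimal Python sketch using a salted, slow hash from the standard library. The function names and iteration count are illustrative assumptions.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a slow, salted hash so stored credentials are never plaintext."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Recompute the hash; compare_digest avoids timing side channels."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```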

– Access Card:

This requires a card, key, or some special device to enable access to the network. Security is predicated on the belief that only authorized personnel will possess the specific device. The disadvantage of this method is that the “card” can be lost or stolen.

– Biometrics:

This identifies an authorized user through some recorded physical or behavioral characteristic. Biometrics can authenticate one’s identity based on fingerprints, a voice print, or an iris scan. This system, if properly designed and implemented, can be extremely difficult to compromise.

b. Access Control:

This refers to the ability to control the level of access to a network or system, preventing unauthorized use of resources and determining how many resources a user can consume. The higher the access level granted, the more resources the user can make use of.
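A minimal sketch of what such access control can look like in code, assuming a simple role-based model; the roles and permissions here are purely illustrative, not taken from any real policy.

```python
# Hypothetical role-to-permission mapping; a real system would load this
# from the organization's security policy.
ROLE_PERMISSIONS = {
    "guest": {"read"},
    "staff": {"read", "write"},
    "admin": {"read", "write", "configure"},
}

def is_allowed(role, action):
    """Grant an action only if the user's role includes that permission."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("staff", "write"))      # True
print(is_allowed("guest", "configure"))  # False: access denied
```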

c. Confidentiality:

This is meant to protect data from unauthorized disclosure: only the sender and the intended receiver should know the message contents. This is usually achieved either by restricting access to the information or by encrypting the data.

d. Integrity:

This refers to the ability to protect data so that the message is original and has not been altered or destroyed. Data accounts must be consistent with the amount of resources allocated or consumed, and data must be timely and complete. For example, the total bill payable by a network subscriber must be in tune with the amount of data the user consumed.
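One standard way to detect alteration in transit is a message authentication code. The sketch below is a minimal illustration using Python's standard library; it assumes the sender and receiver already share a secret key, and the key and message are illustrative.

```python
import hashlib
import hmac

SHARED_KEY = b"pre-shared secret between sender and receiver"  # illustrative

def tag_message(message):
    """Sender attaches an HMAC tag so tampering in transit is detectable."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify_message(message, tag):
    """Receiver recomputes the tag; any alteration changes the HMAC."""
    return hmac.compare_digest(tag_message(message), tag)

msg = b"bill: 4.2 GB consumed"
tag = tag_message(msg)
print(verify_message(msg, tag))                         # True: intact
print(verify_message(b"bill: 0.2 GB consumed", tag))    # False: altered
```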

e. Non-repudiation:

This ensures that the parties involved in a network service cannot deny that a service was provided or received, or that information or files were accessed or altered, when those events actually took place. This is important because, without it, a dishonest party could claim to have provided a service they never delivered, or deny receiving or using a service in order to escape financial liability.

Network Security Mechanisms

A security mechanism is a system designed to detect abnormal conditions, prevent threats, and initiate recovery from a successful security breach. Internet threats will continue to be a major headache for network operators as long as information is accessible and transferred through various means, and no single mechanism supports all required security functions. Below are some detection and defense mechanisms available to handle most network threats.

a. Authentication

Authentication in a digital setting establishes the identity of the sender and/or the receiver of information so that the receiver of a message can be confident of the identity of the sender. Authentication becomes invalid if the identity of the sender or receiver cannot be properly established. Digital signatures may be used to confirm the identity of the parties involved.
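As an illustration of such digital signatures, here is a minimal sketch using the third-party Python cryptography package (an assumption on our part; any comparable library would do). Only the holder of the private key can produce a signature that the corresponding public key verifies.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # kept secret by the sender
public_key = private_key.public_key()        # distributed to receivers

message = b"transfer approved"
signature = private_key.sign(message)        # only the sender can produce this

try:
    public_key.verify(signature, message)    # raises if either was changed
    print("signature valid: sender authenticated")
except InvalidSignature:
    print("signature invalid: reject the message")
```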

b. Encryption

Encryption is the conversion of data into an unintelligible form so that it cannot be easily decoded or understood by unauthorized people. Decryption, on the other hand, converts encrypted data into its original form after the message is received.
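A minimal sketch of symmetric encryption and decryption, again assuming the third-party cryptography package is available; the plaintext and key handling are illustrative only.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # must be shared secretly with the receiver
cipher = Fernet(key)

# In transit, the token is unintelligible to anyone without the key.
token = cipher.encrypt(b"card number: 4111-1111-1111-1111")
print(token[:20], b"...")

# The receiver converts the encrypted data back to its original form.
original = cipher.decrypt(token)
print(original)   # b'card number: 4111-1111-1111-1111'
```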

c. Packet Filtering

Packet filters examine the headers of data packets according to programmed parameters and forward them to the firewall. Packets whose headers do not meet the preset criteria are blocked from passing to the servers.
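A toy illustration of header-based filtering in Python follows; the rule set, addresses, and default-deny policy are assumptions made for the example, not a real firewall configuration.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_port: int
    protocol: str   # e.g. "tcp" or "udp"

# Illustrative rules: allow web traffic, block a known-bad source.
BLOCKED_SOURCES = {"203.0.113.66"}   # RFC 5737 documentation address
ALLOWED_PORTS = {80, 443}

def filter_packet(pkt):
    """Return True if the packet's header fields meet the preset criteria."""
    if pkt.src_ip in BLOCKED_SOURCES:
        return False
    if pkt.protocol == "tcp" and pkt.dst_port in ALLOWED_PORTS:
        return True
    return False   # default-deny everything the rules don't allow

print(filter_packet(Packet("198.51.100.7", 443, "tcp")))   # True
print(filter_packet(Packet("203.0.113.66", 443, "tcp")))   # False: blocked
```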

d. Firewalls

A firewall is an application program installed in servers and routers to control traffic into and out of a network. It works like a security guard to check and regulate everything going into and out of a network and can be implemented as a security buffer or a combo of hardware and software solutions and can be fairly simple or very complex.

e. Intrusion Detection Systems

These are sensors placed at strategic junctions of the network to detect suspicious traffic and report it to the network administrator. Some such systems only monitor and raise alerts on an attack, while others also try to block the threat. They can be installed on a network, a host server, or an application (to monitor that particular application only).
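As a rough sketch of the monitor-and-alert style of detection, the Python fragment below flags source addresses with repeated failed logins, a crude signature of brute-force attempts. The threshold and event format are illustrative assumptions.

```python
from collections import Counter

FAILED_LOGIN_THRESHOLD = 5   # illustrative tuning parameter

def scan_auth_events(events):
    """events is a list of (source_ip, success) pairs; return the sources
    whose failure count crosses the threshold so an alert can be raised."""
    failures = Counter(src for src, ok in events if not ok)
    return [src for src, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD]

events = [("203.0.113.9", False)] * 6 + [("198.51.100.7", True)]
print(scan_auth_events(events))   # ['203.0.113.9'] -> notify the admin
```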

f. Anti-Malware Software

Antivirus applications can be installed on the system to scan or sniff transmitted data for viruses, worms, and Trojan horses. They can also be used to clean up malware infection in a system.

g. Secure Socket Layer (SSL)

SSL is a protocol suite used to create a secure tunnel between a client and a server, such as between a web browser and a web server, to secure data passing across the Internet. SSL (and its successor, TLS) uses certificates to authenticate the parties and verify identity before a connection is granted.
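Python's standard ssl module can establish such a tunnel. The sketch below is a minimal client-side example (it assumes outbound network access); the default context verifies the server's certificate against the system's trusted CAs before any data is exchanged.

```python
import socket
import ssl

# The default context enables certificate and hostname verification.
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version())   # negotiated protocol, e.g. 'TLSv1.3'
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        print(tls.recv(200))   # the response travels through the tunnel
```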

Network Security Tips

The best way to prevent a system from being infected by malware is to avoid copying or downloading files of unknown origin. In addition:

  • Antivirus software packages should be used to scan files to ensure that they are free of malware.
  • Firewalls should be used at all data transmission points.
  • Idle servers and ports should be turned off to prevent them from being exploited by threats.
  • Longer passwords that mix letters, numbers, and symbols should be used (see the sketch after this list).
  • Files should always be backed up both on and off site periodically.
  • The network should be continuously monitored for suspicious activities.
  • Passwords should be changed regularly.
  • Up-to-date incident response mechanisms should be installed.
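For the password tip above, here is a minimal Python sketch that generates strong random passwords using a cryptographically secure generator; the length and alphabet are illustrative choices.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=16):
    """Use the secrets module (not random) so passwords are unpredictable."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())   # e.g. 'k#R9v!Qz@2mW$7pL'
```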

Conclusion

Securing a modern network infrastructure and its services requires a good understanding of possible threats, vulnerabilities, and countermeasures, plus proactive actions to contain threats when they materialize. Although such knowledge cannot deter all attempts at network incursion or system attack, it can help eliminate or minimize the risks associated with common threats, detect breaches quickly, and significantly reduce potential damage. When formulating a security policy, there should be a balance among several factors, such as network size, risk analysis, the cost of implementing the policy, and its impact on overall network service.

Open-Source Cloud – Viable Alternatives

In a bid to reduce operating costs, new, efficient computing technologies and architectures are being developed to enable multiple users to share computing resources, and these shared systems are becoming more acceptable and popular with both companies and private users. Cloud computing is one such development.

Cloud computing is a large agglomeration of virtualized computing resources that are readily available and accessible for use as required by users. With cloud computing, users can host their applications, store data, and use resources from other enterprises on a pay-per-use basis.

Cloud service users are not concerned about where their apps and data are hosted as long as they can easily access them through the Internet. That allows organizations to drastically cut down or eliminate costs required to acquire and maintain costly infrastructure needed for the provision of their services. Hence users of cloud resources rent virtualized computing resources and transfer operational risks to cloud service providers.

Cloud Computing Architecture

The cloud computing environment is made up of four layers:

  1. Hardware Layer: This consists of the cloud’s physical resources, such as servers, fixtures and fittings, electric power, network devices, etc. These are usually found in large data processing and storage centers that house thousands of servers in order to improve fault tolerance and ensure service continuity if some servers break down.
  2. Infrastructure Layer: This consists of hypervisors, or virtual machine monitors (VMMs), such as VMware or Xen, which enable the creation of virtualized services. Allocating virtualized computing resources such as processing and storage is also an important function of hypervisors.
  3. Platform Layer: This layer is made up of the various operating systems on which software applications run. It mainly aims to lower the cost of building applications from scratch. Google App Engine, for example, operates in the platform layer, providing API support that developers can use to build web applications, databases, etc.
  4. Application Layer: At the top of the cloud computing architecture sits the application layer which is made up of actual cloud-based applications. These applications are not the same as conventional applications since they are designed for optimum performance and can scale according to demand.

Differences Between Clouds and Data Centers

Cloud computing services are largely different from services provided by conventional data processing centers. The differences are highlighted below:

  • Multiple Service Providers: In a cloud services system, multiple providers can run on a single large data center. Taking the cloud computing architecture as an example, each service provider is concerned only with maintaining and running the attributes of its own layer. For example, an infrastructure service provider can rent its infrastructure to multiple platform-layer service providers.
  • Shared Pool of Resources: Because the cloud environment is close-knit, the same resources can be shared by different service providers. For example, power, network equipment, storage, and hypervisors can be shared among the service providers.
  • Accessibility: Unlike data centers where the users have to physically go to the center before they can be served, cloud service users can enjoy cloud-based services from the comfort of their homes and offices as long as they have an Internet connection with good bandwidth.

Open-Source Cloud Computing Solutions

The explosive growth in cloud service usage has necessitated several solutions to meet demand. Some open-source cloud computing tools are presented in Table 1.

| Solution | Service | Infrastructure | Main Characteristics |
| --- | --- | --- | --- |
| OpenNebula | IaaS | Xen hypervisor | Policy-based resource allocation |
| Xen Cloud Platform (XCP) | IaaS | Xen | Automatic configuration and maintenance of clouds |
| Apache VCL | SaaS | VMware | Internet access for applications |
| Eucalyptus | IaaS | Xen hypervisor and KVM | Hierarchical architecture |
| TPlatform | PaaS | MapReduce, BigTable, TFS | Web-based processing and data mining |
| Nimbus | IaaS | Xen hypervisor and KVM | Aims to convert clusters into IaaS clouds |
| Enomaly | IaaS | Xen, VirtualBox, and KVM | Open edition focused on small cloud environments |

Table 1: Open-Source Cloud Computing Solutions

OpenNebula

OpenNebula is an open-source cloud toolkit that can be used to design and build public, private, and hybrid clouds. It can also be integrated with networking and storage solutions so it can be deployed in any data center. The OpenNebula solution enables the provision of cloud services such as storage and network virtualization on shared infrastructure.

Xen Cloud Platform (XCP)

The Xen hypervisor is a robust tool used for infrastructure virtualization. It provides a bridge between the service provider’s hardware and the guest operating systems, enabling a single server to run many virtual servers. The Xen Cloud Platform provides infrastructure-layer service only, along with automatic configuration and maintenance of cloud infrastructure platforms. The Xen solution is adopted by major cloud platforms such as Nimbus, Eucalyptus, and Amazon EC2.

Apache Virtual Computing Lab (VCL)

Apache VCL is an open-source, cloud-based software-as-a-service (SaaS) platform that provides remote access to diverse applications through the Internet. Users may use the system immediately or book a reservation to use it later.

Apache VCL architecture is made up of the following parts:

• The Web Server: This server acts as the VCL portal. It serves as the user interface and manager, directing and managing users’ consumption of VCL resources.

• The Management Nodes: These form the VCL processing system, controlling the servers, storage racks, and virtual machines as directed. A management node is also responsible for processing users’ reservations and other jobs assigned by the VCL web server, and it ensures that the VCLD service is always available to users.

• The Database Server: This uses a Linux-based SQL database to store data about VCL’s resource inventory, reservations, user data, and access controls.

Eucalyptus

This is an open-source cloud service mainly concerned with academic research; as such, it provides important resources needed for scientific and technological experiments and study. It supports virtual machines that run on the Xen hypervisor. Eucalyptus users can start, pause, control, and stop these virtual machines during experiments.

Just like the cloud architecture, Eucalyptus architecture consists of four layers. However, with Eucalyptus, each layer can be implemented as an independent service.

1. Node Controller: This controller runs on every node that hosts VMs. It controls and queries the OS and hypervisor, collects basic information about the node’s available resources, such as free disk space, and checks the condition of the VM resources.

2. Cluster Controller: This controller runs on any computer that has network connectivity to two or more nodes in order to effectively monitor, control, collect data from, and report on the state of the nodes.

3. Storage Controller: This controller is attached to storage devices and regulates the storage and retrieval of system and user data.

4. Cloud Controller: This controller is the gateway into the Eucalyptus cloud and manages users’ consumption of virtualized resources. It provides, as a web service, the UI, data, and other requested cloud services to users.

The characteristic that distinguishes Eucalyptus from other cloud services is that it is designed from scratch to be simple: it does not require dedicated resources, and it encourages the integration of third-party extensions through a modular software framework. It also provides a virtual network layer that isolates the traffic of different service users in such a way that each user’s cluster of machines appears to be on the same local network.

TPlatform

TPlatform is an open-source cloud solution that serves as a development platform for web, data mining, and data processing applications, delivered as a platform-as-a-service (PaaS) solution.

The TPlatform cloud service is driven by the MapReduce processing model, the BigTable data storage system, and the scalable Tianwang file system (TFS), which is similar to the Google File System. It also provides the infrastructure services used for application development and data processing.

Nimbus

Nimbus is an Apache-licensed open-source solution used to turn clusters of infrastructure into an infrastructure-as-a-service (IaaS) cloud. It gives users the opportunity to use remote resources by deploying virtual machines through its virtual workspace service (VWS), which offers different frontends that users can utilize.

Enomaly Elastic Computing Platform

Enomaly ECP is an open-source cloud computing solution run by Enomaly Inc. Its main objective is to provide virtual machine administration in small cloud environments. Compared with the commercial edition, the Enomaly open-source edition is constrained by limited scalability, no support for accounting and metering, no capacity control, etc.

Challenges of Open-Source Clouds

The development and operation of open-source cloud computing systems comes with several technical and policy challenges for cloud service providers. Some of these challenges, including decision, operation, standardization, and negotiation, are discussed below.

  • Decision: The basic goal of any business is to provide services that consumers need and are ready to pay for. A cloud service provider is required to stock applications that maximize the utilization of cloud resources, and this goes well beyond the actual programming code. Decisions on how to implement virtualization and communication links must also be made, and the provider has to determine and build the best technological resources needed to run these virtual utilities.
  • Operation: The cloud is made up of many different machines, such as processing servers, routers, switches, and storage servers. Due to the heterogeneity of device manufacturers, coupling all these components to work together harmoniously is quite a challenge when designing any cloud solution. Cloud systems depend on web services to provide communication among the users, processing servers, and storage nodes, so efficient communication between the systems is tied to the reliability of the network and the available bandwidth. Constantly configuring the nodes to work cohesively, ensuring availability, virtualization, and scalability of the whole system, is no easy task.
  • Standardization: Another considerable challenge associated with cloud computing in general is the need to standardize the proprietary interfaces used to access different cloud service providers’ services. Users who intend to transfer their data and applications to another cloud currently find it hard to move. There are ongoing efforts to get cloud service providers to liaise and offer a standardized application programming interface (API) based on open standards.
  • Negotiation: This involves negotiations between cloud service providers and application developers on how APIs will be built, implemented, and maintained. Depending on the programmability level and service niche offered by the cloud, the API can be implemented in various ways, ranging from control of virtual machines to a web-based toolkit used to develop applications in the cloud. In addition to these basic functions, a cloud service provider must also permit developers to add, update, or replace their apps, alongside other functionalities such as load balancing and backup options. All of these are constrained by geographical, security, and copyright restrictions.

Conclusion

There are various cloud computing solutions serving different niches and providing IaaS, PaaS, or SaaS services. Each solution represents a different view of cloud resource provision, utilization, and control. More research and effort is being invested by open-source cloud projects such as Xen and Eucalyptus to solve the challenges associated with cloud computing and to introduce innovations that will positively impact business and operational processes.