Monday, September 28, 2015

Cloud Computing Terms

Understanding Cloud Computing
Provider clouds offer increased capabilities for heavily utilized systems and networks.
Software as a Service (SaaS) includes web-based applications such as web-based email.
Infrastructure as a Service (IaaS) provides hardware resources via the cloud. It can help an
organization limit the size of its hardware footprint and reduce personnel costs.
Platform as a Service (PaaS) provides an easy-to-configure operating system and on-demand
computing for customers.
Physical control of data is a key security control an organization loses with cloud computing.


Software as a Service
Software as a Service (SaaS) includes any software or application provided to users over a
network such as the Internet. Internet users access the SaaS applications with a web browser. It
usually doesn’t matter which web browser or operating system a SaaS customer uses. They could be
using Internet Explorer, Chrome, Firefox, or just about any web browser.
As mentioned previously, web-based email is an example of SaaS. This includes Gmail, Yahoo!
Mail, and others. The service provides all the components of email to users via a simple web
browser.
If you have a Gmail account, you can also use Google Docs, another example of SaaS. Google
Docs provides access to several SaaS applications, allowing users to open text documents,
spreadsheets, presentations, drawings, and PDF files through a web browser.
A talented developer named Lee Graham and I teamed up to launch CertApps.com and create study
materials. He’s an Apple guy running a Mac while I’m a Microsoft guy running Windows, and we live
in different states. However, we post and share documents through Google Docs and despite different
locations and different applications running on our individual systems, we’re able to easily
collaborate. One risk is that our data is hosted on Google Docs, and if attackers hack into Google
Docs, our data may be compromised.
A specialized version of SaaS is Management as a Service (MaaS). With MaaS, an organization
is able to outsource management and monitoring of IT resources. For example, a third party can
routinely review logs and provide reports back to the organization.
Multi-tenancy (sometimes referred to as multi-tenant) is a concept associated with cloud
computing. A multi-tenancy architecture uses a single instance of an application accessed by multiple
customers. You can think of this like a single instance of a web browser accessing multiple web sites
in separate tabs. In contrast, single-tenancy architecture creates a separate instance of a SaaS
application for each customer. Using the web browser analogy, you’d have a separate web browser
window for every site you’re visiting. Customer data remains private in both multi-tenancy and single-tenancy architectures.



Platform as a Service
Platform as a Service (PaaS) provides customers with a preconfigured computing platform they
can use as needed. It provides the customer with an easy-to-configure operating system, combined
with appropriate applications and on-demand computing.
Many cloud providers refer to this as a managed hardware solution.


Infrastructure as a Service
Infrastructure as a Service (IaaS) allows an organization to outsource its equipment
requirements, including the hardware and all of its support operations. The IaaS service provider
owns the equipment, houses it in its data center, and performs all of the required hardware
maintenance. The customer essentially rents access to the equipment and often pays on a per-use
basis.
Many cloud providers refer to this as a self-managed solution. They provide access to a server
with a default operating system installation, but customers must configure it and install additional
software based on their needs. Additionally, customers are responsible for all operating system
updates and patches.
IaaS can also be useful if an organization is finding it difficult to manage and maintain servers in
its own data center. By outsourcing its requirements, the company limits its hardware footprint. It can
do this instead of, or in addition to, virtualizing some of its servers. With IaaS, it needs fewer servers
in its data center and fewer resources, such as power, HVAC, and personnel to manage the servers.
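Since IaaS customers are responsible for configuration and patching, the first-boot routine on a freshly rented server typically looks something like the sketch below (assuming a RHEL-style guest; the httpd package is just an example of software you might install yourself):

# The provider only supplies a default OS install; the customer does the rest.
yum -y update                  # apply all pending OS updates and patches
yum -y install httpd           # install whatever software you actually need
systemctl enable httpd         # configure and enable your own services
systemctl start httpd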

Remember this
Applications such as web-based email provided over the Internet are
Software as a Service (SaaS) cloud-based technologies. Platform as a
Service (PaaS) provides customers with a fully managed platform, which the
vendor keeps up to date with current patches. Infrastructure as a Service
(IaaS) provides customers with access to hardware in a self-managed
platform. Customers are responsible for keeping an IaaS system up to date.

Public Versus Private Cloud
Public cloud services are available from third-party companies. For example, Dropbox and
Google operate file-hosting services. Some services are available free and some services cost
money. For example, Google offers 15 GB of free storage, but if you want additional storage, you can
purchase it from Google.
A private cloud is set up for specific organizations. For example, the Shelbyville Nuclear Power
Plant might decide it wants to store data in the cloud, but does not want to use a third-party vendor.
Instead, the plant chooses to host its own servers and make these servers available to internal
employees through the Internet.
Not all cloud implementations fit exactly into these definitions, though. A hybrid cloud is a
combination of two or more clouds. They can be all private, all public, or a combination. These
retain separate identities to help protect resources in the private cloud. However, they are bridged
together, often in such a way that it is transparent to the users.



Source: Darril Gibson's CompTIA Security+ book

Cloud Computing

Cloud computing simply refers to accessing computing resources at a location other than your local
computer. In most situations today, you’re accessing these resources through the Internet.
As an example, if you use web-based email such as Gmail, you’re using cloud computing. More
specifically, the web-based mail is a Software as a Service cloud computing service. You know that
you’re accessing your email via the Internet, but you really don’t know where the physical server
hosting your account is located. It could be in a data center in the middle of Virginia, tucked away in
Utah, or just about anywhere else in the world.
Cloud computing is very useful for heavily utilized systems and networks. As an example,
consider the biggest shopping day in the United States—Black Friday, the day after Thanksgiving,
when retailers go into the black. Several years ago, Amazon.com had so much traffic during the
Thanksgiving weekend that its servers could barely handle it. The company learned its lesson, though.
The next year, it used cloud computing to rent access to servers specifically for the Thanksgiving
weekend, and, despite increased sales, it didn’t have any problems.
As many great innovators do, Amazon didn’t look on this situation as a problem, but rather an
opportunity. If it needed cloud computing for its heavily utilized system, other companies probably
had the same need. Amazon now hosts cloud services for other organizations via its Amazon Elastic
Compute Cloud (Amazon EC2) service. Amazon EC2 combines virtualization with cloud computing,
and Amazon currently provides a wide variety of services via EC2.
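As a hedged illustration of how renting capacity works (the AMI ID and instance ID below are placeholders, not real values), provisioning and releasing EC2 servers for a demand spike comes down to a few AWS CLI calls:

# Launch two extra web servers for the holiday weekend.
aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro --count 2
# Check what is running.
aws ec2 describe-instances
# Terminate the extra servers when the spike is over; you pay only for what you used.
aws ec2 terminate-instances --instance-ids i-0abc1234def567890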


Platform as a Service (PaaS)
• No servers, no software, no maintenance team, no HVAC
• Someone else handles the platform, you handle the product
• Salesforce.com

Software as a Service (SaaS) 
• On-demand software
• No local installation
• Google Mail

Infrastructure as a service (IaaS)
• Sometimes called Hardware as a Service (HaaS)
• Outsource your equipment
• Web server and email server providers

Sunday, September 27, 2015

Patching

When Microsoft releases patches on Patch Tuesday, many attackers go to work. They
read as much as they can about the patches, download them, and analyze them. They often attempt to
reverse engineer the patches to determine exactly what the patch is fixing.
Next, the attackers write their own code to exploit the vulnerability on unpatched systems. They
often have exploits attacking systems the very next day—Exploit Wednesday. Because many
organizations take more than a single day to test the patch before applying it, this gives the attackers
time to attack unpatched systems. For organizations without a patch management program, it gives
attackers much longer to attack unpatched systems.
Additionally, some attackers discover unknown exploits before Patch Tuesday. They recognize
that Microsoft will be releasing patches on the second Tuesday of the month, so they wait until the
second Wednesday before launching major attacks to exploit the vulnerability. Unless Microsoft
releases an out-of-band patch, this gives them a full month to exploit systems before a patch is
available.
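The defensive takeaway is to shrink the window between Patch Tuesday and deployment with a patch management program. A minimal sketch of the routine on a RHEL-style system (your organization's tooling will differ):

yum check-update     # list pending updates without applying them
yum -y update        # apply them as soon as your testing clears the patch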

Understanding Virtualization


Virtualization allows multiple servers to operate on a single physical host. It provides
increased availability with various tools such as snapshots and easy restoration.

Virtualization is a technology that has been gaining a lot of popularity in recent years. It allows
you to host one or more virtual systems, or virtual machines (VMs), on a single physical system.
With today’s technologies, you can host an entire virtual network within a single physical system,
and organizations are increasingly using virtualization to reduce costs.
When discussing VMs and studying for the CompTIA Security+ exam, you should understand the
following terms:

Hypervisor. The software that creates, runs, and manages the VMs is the hypervisor. Several
virtualization technologies currently exist, including VMware, Microsoft Hyper-V, Windows
Virtual PC (VPC), and Oracle VM VirtualBox. All of these have their own hypervisor
software.

Host. The physical server hosting the VMs is the host. It requires more resources than a
typical system, such as multiple processors, massive amounts of RAM, fast and abundant hard
drive space, and one or more fast network cards. Although these additional resources increase
the cost of the host, it is still less expensive than paying for multiple physical systems. It also
requires less electricity, less cooling, and less physical space.

Guest. Operating systems running on the host system are guests or guest machines. Most
hypervisors support several different operating systems, including various Microsoft
operating systems and various Linux distributions. Additionally, most hypervisors support
both 32-bit and 64-bit operating systems.
Patch compatibility. It’s important to keep VMs patched and up to date. Patches applied to
physical systems are compatible with virtual systems.
Host availability/elasticity. Elasticity refers to the ability to resize computing capacity based
on the load. For example, imagine one VM has increased traffic. You can increase the amount
of processing power and memory used by this server relatively easily. This allows you to
ensure it remains available even with the increased demand.
Snapshots

Snapshots provide you with a copy of the VM at a moment in time, which you can use as a
backup. If the VM develops a problem, you can revert the image to the state it was in when you took
the snapshot. You are still able to use the VM just as you normally would. However, after taking a
snapshot, the hypervisor keeps a record of all changes to the VM.

Administrators commonly take snapshots of systems prior to performing any risky operation.
Risky operations include applying patches or updates, and installing new applications. Ideally, these
operations do not cause any problems, but occasionally they do. By creating snapshots before these
operations, administrators can easily revert the system to the previous state.
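As one concrete example, here is roughly how that looks with Oracle VM VirtualBox's command-line tool (the VM name "WebServer01" is hypothetical; other hypervisors have equivalent commands):

# Take a snapshot before applying this month's patches.
VBoxManage snapshot "WebServer01" take "pre-patch" --description "before monthly updates"
# ...apply the patches and test...
# If something breaks, power the VM off and revert to the saved state.
VBoxManage controlvm "WebServer01" poweroff
VBoxManage snapshot "WebServer01" restore "pre-patch"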


Note: Virtualization allows multiple virtual servers to operate on a single physical
server. It provides increased availability with lower operating costs.
Additionally, virtualization provides a high level of flexibility when testing
security controls, updates, and patches because they can easily be reverted
using snapshots.



From Darril Gibson's Security+ book

System Image (Gold Image)

Using Imaging for Baselines
One of the most common methods of deploying systems is with images. An image is a snapshot
of a single system that administrators deploy to multiple other systems. Imaging has become an
important practice for many organizations because it streamlines deployments while also ensuring
they are deployed in a secure manner.

Capturing and deploying images
1. Administrators start with a blank source system. They install and configure the operating
system, install and configure any desired applications, and modify security settings.
Administrators perform extensive testing to ensure the system works as desired and that it
is secure before going to the next step.

2. Next, administrators capture the image. Symantec Ghost is a popular imaging application,
and Windows Server 2012 includes free tools many organizations use to capture and
deploy images. The captured image is simply a file that can be stored on a server or
copied to external media, such as a DVD or external USB drive.

3. In step 3, administrators deploy the image to multiple systems. When used within a
network, administrators can deploy the same image to dozens of systems during an initial
deployment, or to just a single system to rebuild it. The image installs the same
configuration on the target systems as the original source system created in step 1.
Administrators will often take a significant amount of time to configure and test the source
system. They follow the same hardening practices discussed earlier and often use security and
configuration baselines. If they’re deploying the image to just a few systems such as in a classroom
setting, they may create the image in just a few hours. However, if they’re deploying it to thousands of systems within an organization, they may take weeks or months to create and test the image. Once
they’ve created the image, they can deploy it relatively quickly with very little administrative effort.
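As a hedged sketch of steps 2 and 3 using the free DISM tool included with Windows (the paths and image name are examples, and the capture is typically run after booting the source system into Windows PE):

REM Step 2: capture the prepared source system's volume into an image file.
dism /Capture-Image /ImageFile:E:\images\gold.wim /CaptureDir:C:\ /Name:"Gold image"
REM Step 3: apply the captured image to a target system's volume.
dism /Apply-Image /ImageFile:E:\images\gold.wim /Index:1 /ApplyDir:C:\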
Imaging provides two important benefits:

Secure starting point. The image includes mandated security configurations for the system.
Personnel who deploy the system don’t need to remember or follow extensive checklists to
ensure that new systems are set up with all the detailed configuration and security settings.
The deployed image retains all the settings of the original image. Administrators will still
configure some settings, such as the computer name, after deploying the image.
Reduced costs. Deploying imaged systems reduces the overall maintenance costs and
improves reliability. Support personnel don’t need to learn several different end-user system
environments to assist end users. Instead, they learn just one. When troubleshooting, support
personnel spend their time focused on helping the end user rather than trying to learn the
system configuration. Managers understand this as reducing the total cost of ownership (TCO)
for systems.

Many virtualization tools include the ability to convert an image to a virtual system. In other
words, once you create the image, you can deploy it to either a physical system or a virtual system.
From a security perspective, there is no difference in how you deploy it. If you’ve locked down the
image for deployment to a physical system, you’ve locked it down for deployment to a virtual system.

Imaging isn’t limited to only desktop computers. You can image any system, including servers.
For example, consider an organization that maintains 50 database servers in a large data center. The
organization can use imaging to deploy new servers or as part of its disaster recovery plan to restore
failed servers. It is much quicker to deploy an image to rebuild a failed server than it is to rebuild a
server from scratch. As long as administrators keep the images up to date, this also helps ensure the
recovered server starts in a secure state.

Configuration Baselines
A configuration baseline identifies the configuration settings for a system. This includes settings
such as printer configuration, application settings, and TCP/IP settings. This is especially useful when
verifying proper operation of a system. As an example, if a server is no longer operating correctly, it
might be due to a configuration change. Administrators might be able to identify the problem by
comparing the current settings against the baseline and correcting any discrepancies.
The differences between a configuration baseline and a security baseline can be a little fuzzy.
The security baseline settings are strictly security related. The configuration baseline settings ensure
consistent operation of the system. However, because the configuration baseline contributes to
improved availability of a system, which is part of the security triad, it also contributes to overall
security.
An important consideration with a configuration baseline is keeping it up to date. Administrators
should update the configuration baseline after changing or modifying the system. This includes after
installing new software, deploying service packs, or modifying any other system configuration
settings.
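A minimal sketch of baseline comparison on a Linux server (file names are arbitrary): record the baseline once after the system is configured and verified, then diff the current state against it when troubleshooting.

# Record the baseline.
rpm -qa | sort > /root/baseline-packages.txt
sysctl -a > /root/baseline-sysctl.txt
# Later, compare the current state against the baseline to spot drift.
rpm -qa | sort | diff /root/baseline-packages.txt -
sysctl -a | diff /root/baseline-sysctl.txt -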

Based on Darril Gibson's Security+ book.

Saturday, September 26, 2015

SYN Flood Attack

The SYN flood attack is a common denial-of-service (DoS) attack. Recall the three-way
handshake used to establish a session: one system sends a SYN packet, the second
system responds with a SYN/ACK packet, and the first system then completes the handshake with an
ACK packet. However, in a SYN flood attack, the attacker sends multiple SYN packets but never
completes the third part of the TCP handshake with the last ACK packet.
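One common server-side mitigation on Linux is SYN cookies, which let the server respond to SYN packets without committing resources to half-open connections. A minimal sketch:

# Enable SYN cookies in the running kernel.
sysctl -w net.ipv4.tcp_syncookies=1
# Persist the setting across reboots.
echo 'net.ipv4.tcp_syncookies = 1' >> /etc/sysctl.conf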

Friday, September 25, 2015

OSI Layers Model

Understanding the Layers
As shown in Table 3.2, the OSI model has seven layers. Many people use mnemonics to
memorize the layers. For example, “All People Seem To Need Data Processing” works for some
people. The first letter in each of the words represents the first letter of the layer. The A in All is for
Application, the P in People is for Presentation, and so on. Another common mnemonic is “Please Do
Not Throw Sausage Pizza Away” (for Physical, Data Link, Network, Transport, Session,
Presentation, and Application).
After mastering the mnemonic, you also need to remember which layer is Layer 1, and which
layer is Layer 7. This memory technique may help. You may have heard about a “Layer 8 error.” This
is another way of saying “user error” and users interact with applications. In other words, a user on
the mythical Layer 8 interacts with applications, which are on Layer 7. I don’t mean to belittle users
or user errors—I make my fair share of errors. However, this memory trick has helped me and many
other people remember that the Application layer is Layer 7.
The following sections provide a short synopsis of the OSI model. If you’d like to dig deeper,
check out the “Open System Interconnection Protocols” section on Cisco’s DocWiki site at
http://docwiki.cisco.com/wiki/Open_System_Interconnection_Protocols.

Layer 1: Physical
The Physical layer is associated with the physical hardware. It includes specifications for cable
types, such as 1000BaseT, connectors, and hubs. Computing devices such as computers, servers,
routers, and switches transmit data onto the transmission medium in a bit stream. This bit stream is
formatted according to specifications at higher-level OSI layers.

Layer 2: Data Link
The Data Link layer is responsible for ensuring that data is transmitted to specific devices on the
network. It formats the data into frames and adds a header that includes MAC addresses for the
source and destination devices. It adds frame check sequence data to the frame to detect errors. This
does not support error correction though. The Data Link layer simply discards frames with detected
errors. Flow control functions are also available on this layer.
Switches operate on this layer. As a reminder, each computer NIC has a MAC address assigned, and
switches map the computer MAC addresses to physical ports on the switch. Systems use ARP to
resolve IPv4 addresses to MAC addresses, and NDP to resolve IPv6 addresses to MAC addresses.
VLANs are defined on this layer.

Layer 3: Network
The Network layer uses logical addressing in the form of IP addresses. This
includes both IPv4 addresses and IPv6 addresses. Packets identify where the traffic originated (the
source IP address) and where it is going (the destination IP address). Other protocols that operate on
this layer are IPsec and ICMP. Routers and Layer 3 switches operate on this layer.

Layer 4: Transport
The Transport layer is responsible for transporting data between systems, commonly referred to
as end-to-end connections. It provides reliability with error control, flow control, and segmentation
of data. TCP and UDP operate on this layer.

Layer 5: Session
The Session layer is responsible for establishing, maintaining, and terminating sessions between
systems. In this context, a session refers to an extended connection between two systems sometimes
referred to as dialogs or conversations. As an example, if you log on to a web page, the Session layer
establishes a connection with the web server and keeps it open while you’re interacting with the web
pages. When you close the pages, the Session layer terminates the session.
If you’re like many users, you probably have more than one application open at a time. For
example, in addition to having a web browser open, you might have an email application open. Each
of these is a different session, and the Session layer manages them separately.

Layer 6: Presentation
The Presentation layer is responsible for formatting the data as needed by the end-user
applications. For example, American Standard Code for Information Interchange (ASCII) and
Extended Binary Coded Decimal Interchange Code (EBCDIC) are two standards that define codes
used to display characters on this layer.

Layer 7: Application
The Application layer is responsible for displaying information to the end user in a readable
format. Application layer protocols typically use this layer to determine if sufficient network
resources are available for an application to operate on the network.
Note that this layer doesn’t refer to end-user applications directly. However, many end-user
applications use protocols defined at this layer. For example, a web browser interacts with DNS
services to identify the IP address of a web site name. Similarly, HTTP transmits web pages over the
Internet on this layer, which are ultimately displayed in a web browser.

Some of the protocols that operate on this layer are DNS, FTP, FTPS, HTTP, HTTPS, IMAP4,
LDAP, POP3, RDP, SCP, SFTP, SMTP, SNMP, SSH, Telnet, and TFTP. SCP isn’t defined in an RFC
so you won’t find a definitive source indicating which layer it operates on. However, SCP uses SSH
for data transfer and SSH operates on Layer 7. Similarly, RDP is a proprietary protocol and
Microsoft doesn’t link it to an OSI layer. However, RDP is listed as an Application layer protocol on
the TCP/IP model.
Many advanced devices are application aware and operate on all of the layers up to the
Application layer. This includes proxies, application-proxy firewalls, web application firewalls,
web security gateways, and UTM security appliances.

How web page is displayed


Imagine that you decide to visit the web site http://GetCertifiedGetAhead.com using your web
browser: you type the URL into the browser, and the web page appears. Here are the details of
what is happening. Figure 3.3 provides an overview, and the following text explains the process.

Your computer creates a packet with source and destination IP addresses and source and
destination ports. It queries a DNS server for the IP address of GetCertifiedGetAhead.com and learns
that the IP address is 72.52.206.134. Additionally, your computer will use its IP address as the source
IP address. For this example, imagine your computer’s IP address is 70.150.56.80.

Because the web server is serving web pages using HTTP, whose well-known port is 80, the
destination port is 80. Your computer will identify an unused port in the dynamic and private ports
range (a port number between 49,152 and 65,535) and map that port to the web browser. For this
example, imagine it assigns 49,152 to the web browser. It uses this as the source port.
At this point, the packet has both destination and source data as follows:
Destination IP address: 72.52.206.134 (the web server)
Destination port: 80
Source IP address: 70.150.56.80 (your computer)
Source port: 49,152


TCP/IP uses the IP address (72.52.206.134) to get the packet to the GetCertifiedGetAhead web
server. When it reaches the web server, the server looks at the destination port (80) and determines
that the packet needs to go to the web server program servicing HTTP. The web server creates the
page and puts the data into one or more return packets. At this point, the source and destinations are swapped because the packet is coming from the server back to you:
Destination IP address: 70.150.56.80 (your computer)
Destination port: 49,152
Source IP address: 72.52.206.134 (the web server)
Source port: 80

Again, TCP/IP uses the IP address to get the packets to the destination, which is your computer at
this point. Once the packets reach your system, it sees that port 49,152 is the destination port. Because
your system mapped this port to your web browser, it sends the packets to the web browser, which
displays the web page.
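You can watch this exchange yourself from a Linux shell; a minimal sketch (the exact output will vary):

# Fetch the page verbosely; curl shows the resolved IP and the connection to port 80.
curl -v http://GetCertifiedGetAhead.com/
# In another terminal during a transfer, list TCP connections to see the ephemeral
# source port your system mapped to the client process.
ss -tnp | grep ':80'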


Source: Darril Gibson's 2014 book, CompTIA Security+ SY0-401, page 249

New credit card transaction

How a new contactless (proximity) credit card transaction happens.

It’s intriguing how this is accomplished. The card doesn’t require its own power source. Instead, the electronics in the card include a capacitor and a coil that can accept a charge from the proximity card reader. When you pass the card close to the reader, the reader excites the coil and stores a charge in the capacitor. Once charged, the card transmits the information to the reader using a radio frequency.

Thursday, September 17, 2015

MAC access control

Which of the following concepts best describes the mandatory access control model?
ANSWER

Biba
Lattice - THE CORRECT ANSWER
Clark-Wilson
Bell-La Padula

WHAT YOU NEED TO KNOW

Mandatory access control has two common implementations: rule-based access control and lattice-based access control. Lattice-based access control is used for more complex determinations of object access by subjects; this is done with advanced mathematics that creates sets of objects and subjects and defines how the two interact.

Bell-La Padula is a state machine model used for enforcing access control in government applications. It is a less-common, multilevel security derivative of mandatory access control. This model focuses on data confidentiality and controlled access to classified information.

The Biba Integrity Model describes rules for the protection of data integrity.

Clark-Wilson is another integrity model that provides a foundation for specifying and analyzing an integrity policy for a computing system.

Firewall Rule - Implicit deny

The principle of implicit deny is used to deny all traffic that isn’t explicitly (or specifically) allowed or denied. In other words, if the type of traffic hasn’t been associated with a rule, the implicit deny rule will kick in, thus protecting the device.

Access control lists are used to filter packets and will include rules such as permit any, or explicit denies for particular IP addresses.
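As a minimal sketch of implicit deny using Linux iptables (the allowed ports are examples):

# Default policy: drop anything not explicitly allowed (implicit deny).
iptables -P INPUT DROP
# Explicit allows: established traffic, SSH, and web traffic.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
# Traffic matching no rule above falls through to the DROP policy.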

Cloud computing

Cloud computing relies on an external service provider. Your organization would still be able to logically manipulate data services and have administrative control over them, much as if the data and services were administered locally. But physical control would be lost, and the organization would rely solely on the cloud computing service for hardware, servers, network devices, and so on.

Webmail can be classified as software as a service (SaaS). This is when an external provider (in the cloud) offers e-mail services that a user can access with a web browser. Examples include Gmail and Hotmail.

Platform as a service (PaaS) is when a cloud-based service provider offers an entire application development platform that can be accessed via a web browser or other third-party application.

Infrastructure as a service (IaaS) is when a cloud-based service provider offers an entire network located on the Internet.

Symmetrical and Asymmetrical encryption

Symmetrical encryption is also referred to as secret key, shared key, private key, single key, and even session key encryption.
Asymmetrical encryption uses private and public key pairs, only one of which is secret.
A one-way function is easy to compute when being generated but difficult or impossible to compute in reverse.
Quantum encryption, also known as quantum cryptography, uses quantum mechanics to guarantee secure communications. It enables two parties to produce a shared, random-bit string, known only to them, that encrypts and decrypts messages.
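A minimal sketch contrasting the two with OpenSSL (file names are arbitrary, and the inline passphrase is for illustration only):

# Symmetric: the same secret key encrypts and decrypts.
openssl enc -aes-256-cbc -salt -in secret.txt -out secret.enc -pass pass:SharedKey123
openssl enc -d -aes-256-cbc -in secret.enc -out secret.dec -pass pass:SharedKey123

# Asymmetric: a key pair; the public key encrypts, only the private key decrypts.
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem
openssl rsautl -encrypt -pubin -inkey public.pem -in secret.txt -out secret.rsa
openssl rsautl -decrypt -inkey private.pem -in secret.rsa -out secret.out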



-----------------------------------------------------------------



Comparing Symmetric Encryption to a Door Key
Occasionally, security professionals compare symmetric keys to a house key,
and this analogy helps some people understand symmetric encryption a little
better. For example, imagine Marge moves into a new home. She’ll receive a
single key that she can use to lock and unlock her home. Of course, Marge
can’t use this key to unlock her neighbor’s home.

Later, Marge marries Homer, and Homer moves into Marge’s home. Marge
can create a copy of her house key and give it to Homer. Homer can now
use that copy of the key to lock and unlock the house. By sharing copies of
the same key, it doesn’t matter whether Marge or Homer is the one who
locks the door; they can both unlock it.

Similarly, symmetric encryption uses a single key to encrypt and decrypt
data. If a copy of the symmetric key is shared, others who have the key can
also encrypt and decrypt data.

Testing type

A gray-box test is when you are given limited information about the system you are testing.

Black-box testers are not given logins, source code, or anything else, though they may know the functionality of the system.

White-box testers are given logins, source code, documentation, and more.

Hypervisor and Visualization

What is the best reason why security researchers may choose to use virtual machines?

The best reason why security researchers use virtual machines is to offer an environment where malware might be executed but with minimal risk to the equipment. The virtual machine is isolated from the actual operating system, and the virtual machine can simply be deleted if it is affected by viruses or other types of malware.

The best reason is that it offers the isolated environment where a malicious activity can occur but be easily controlled and monitored.

Additional Learning
Hypervisor

Most virtual machine software is designed specifically to host and be available to more than one VM. A byproduct is the intention that all VMs are able to communicate with each other quickly and efficiently. This concept is summed up by the term hypervisor. A hypervisor allows multiple virtual operating systems (guests) to run at the same time on a single computer. It is also known as a virtual machine manager (VMM). The term hypervisor is often used ambiguously.

Type 1: Native— The hypervisor runs directly on the host computer’s hardware. Because of this it is also known as “bare metal.”

Type 2: Hosted— This means that the hypervisor runs within (or “on top of”) the operating system.

Generally, Type 1 is a much faster and much more efficient solution than Type 2. It is also more elastic, meaning that environments using Type 1 hypervisors can usually respond to quickly changing business needs by adjusting the supply of resources as necessary. Because of this elasticity and efficiency, Type 1 hypervisors are the kind used by web-hosting companies and by companies that offer cloud computing solutions such as infrastructure as a service (IaaS).
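If you're curious whether a given Linux system is itself running as a guest, and under which hypervisor, one quick check (on systemd-based distributions such as RHEL 7):

# Prints the detected technology (kvm, vmware, microsoft, oracle, ...) or "none" on bare metal.
systemd-detect-virt
# An alternative view from the CPU information:
lscpu | grep -i virtualization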



Virtualization is a broad term that includes the use of virtual machines and the abstraction of computer resources.



When a web script runs in its own environment for the express purpose of not interfering with other processes, it is known as running in a sandbox. Often, the sandbox will be used to create sample scripts before they are actually implemented.

Quarantining is a method used to isolate viruses.

A honeynet is a collection of servers used to attract hackers and isolate them in an area where they can do no damage.

SEC+ :- Virtualized browsers

Can virtualized browsers protect the OS on which they are installed?

The beauty of a virtualized browser is that regardless of whether a virus or other malware damages it, the underlying operating system will remain unharmed. The virtual browser can be deleted and a new one can be created; or if the old virtual browser was backed up prior to the malware attack, it can be restored. This concept applies to entire virtual operating systems as well, if configured properly.

Windows - Active directory

What do you need backed up on a domain controller to recover Active Directory?

The System State needs to be backed up on a domain controller to recover the Active Directory database in the future. The System State includes system files but does not include the entire operating system. If a server fails, the operating system would have to be reinstalled, and then the System State would need to be restored.
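On Windows Server with the Windows Server Backup feature installed, a System State backup can be taken from the command line; the target drive letter below is an example:

REM Back up the System State (includes Active Directory on a domain controller).
wbadmin start systemstatebackup -backupTarget:E:
REM To recover, boot the DC into Directory Services Restore Mode and run:
REM   wbadmin start systemstaterecovery -version:<version-identifier>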

Windows - Find version number of your windows machine

In Windows, which of the following commands will not show the version number?
ANSWER
Winver
Msinfo32.exe
Wf.msc  - THE CORRECT ANSWER
Systeminfo

WHAT YOU NEED TO KNOW

Of the answers listed, the only one that will not show the version number is wf.msc. That brings up the Windows Firewall with Advanced Security.

All of the other answers will display the version number in Windows.
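For example, systeminfo's long output can be filtered down to just the version lines:

REM Show only the OS name and version lines from systeminfo.
systeminfo | findstr /B /C:"OS Name" /C:"OS Version"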

BYOD concern

Which of the following is a concern based on a user taking pictures with a smartphone?

Geotagging is a concern based on a user taking pictures with a mobile device such as a smartphone. This is because the act of geotagging utilizes GPS, which can give away the location of the user.

Application whitelisting is when there is an approved list of applications for use by mobile devices. Usually implemented as a policy, if the mobile device attempts to open an app that is not on the list, the process will fail, or the system will ask for proof of administrative identity.

BYOD stands for bring your own device, a technological concept where organizations allow employees to bring their personal mobile devices to work and use them for work purposes.

MDM stands for mobile device management, a system that enables a security administrator to configure, update, and secure multiple mobile devices from a central location.
 Additional Learning
BYOD Concerns

Around 2011, organizations began to allow employees to bring their own mobile devices into work and connect them to the organization’s network (for work purposes
only, of course). This “bring your own device” concept has since grown into a more popular method of computing for many organizations. It is enticing from a budgeting standpoint, but can be very difficult on the security administrator, and possibly on the user as well.

In order to have a successful BYOD implementation, the key is to implement storage segmentation—a clear separation of organizational and personal information,
applications, and other content. It must be unmistakable where the data ownership line occurs. For networks with a lot of users, consider third-party offerings from companies that make use of mobile device management (MDM) platforms. These are centralized software solutions that can control, configure, update, and secure remote mobile devices such as Android, iOS, BlackBerry, and so on, all from one administrative console. The MDM software can be run from a server within the organization, or administered within the cloud. It makes the job of a mobile IT security administrator at least manageable. From the central location, the security administrator can implement patch management and antivirus management such as updates to the virus definitions.

The admin can also set up more secure levels of mobile device access control. Access control is the methodology used to allow access to computer systems. For larger organizations, MDM software makes it easy for an admin to view inventory control, such as how many devices are active for each of the mobile operating systems used. It also makes it simpler to track assets, such as the devices themselves, and the types of data each contains. In addition, MDM software makes it less complicated to disable unused features on multiple devices at once, thereby increasing the efficiency of the devices, reducing their footprint, and ultimately making them more secure. For instance, an employee who happens to have both a smartphone and a tablet capable of making cellular calls doesn’t necessarily need the latter. The admin could disable the tablet’s cellular capability, which would increase battery efficiency as well as security for that device. Finally, application control becomes easier as well. Applications can be installed, uninstalled, updated, and secured from that central location. Even devices’ removable storage (often USB-based) can be manipulated—as long as the removable storage is currently connected to the device.

Policies that need to be instituted include an acceptable use policy, a data ownership policy, and a support ownership policy. In essence, these define what a user is allowed to do with the device (during work hours), who owns what data and how that data is separated, and under what scenarios the organization takes care of technical support for the device as opposed to the user.

SEC+ :- What is the best methods to protect the confidential data on the device?

A smartphone is an easy target for theft. Which of the following are the best methods to protect the confidential data on the device?

Remote wipe and encryption are the best methods to protect a stolen device’s confidential or sensitive information.

GPS can help to locate a device, but it can also be a security vulnerability in general; this will depend on the scenario in which the mobile device is used.

Passwords should never be e-mailed and should not be associated with e-mail.

Tethering is when a mobile device is connected to another computer (usually via USB) so that the other computer can share Internet access, or other similar sharing functionality in one direction or the other. This is great as far as functionality goes, but more often than not can be a security vulnerability.

Screen locks are a decent method of reducing the chance of login by the average person, but they are not much of a deterrent for the persistent attacker.
 Additional Learning
On-boarding and off-boarding


Most employees (of all age groups) are also concerned with how on-board devices (such as the on-board camera) can be used against them with or without their knowledge. Companies that offer BYOD solutions tend to refer to the camera (and photos/video taken) as part of the personal area of the device. However, those same companies will include GPS location as something the company can see, but this can be linked to a corporate login, with GPS tracking the user only when the user is logged in. On-boarding and off-boarding in general are another concern. Essentially, on-boarding is when the security administrator takes control of the device temporarily to configure it, update it, and perhaps monitor it, and off-boarding is when the security administrator relinquishes control of the device when finished with it. It brings up some questions for the employee: When does it happen? How long does it last? How will my device be affected? Are there any architectural/infrastructural concerns? For example, will the BYOD solution change the core files of my device? Will an update done by a person when at home render the device inactive the next day at work? That’s just the tip of the iceberg when it comes to questions and concerns about BYOD. The best course of action is for an organization to set firm policies about all of these topics.

Database - NoSQL

NoSQL
There are, however, other databases that don’t use SQL (or use code in addition to SQL). Known as NoSQL databases, they offer a different mechanism for retrieving data than their relational database counterparts. These are commonly found in virtual systems provided by cloud-based services. 

While they are usually resistant to SQL injection, there are NoSQL injection attacks as well. Because of the type of programming used in NoSQL, the potential impact of a NoSQL injection attack can be greater than that of a SQL injection attack. An example of a NoSQL injection attack is the JavaScript Object Notation (JSON) injection attack. But, NoSQL databases are also vulnerable to brute-force attacks (cracking of passwords) and connection pollution (a combination of XSS and code injection techniques). 

Methods to protect against NoSQL injection are similar to the methods mentioned for SQL injection. However, because NoSQL databases are often used within cloud services, a security administrator for a company might not have much control over the level of security that is implemented. In these cases, careful scrutiny of the service-level agreement (SLA) between the company and the cloud provider is imperative.
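As a hedged illustration of the idea (the URL and parameter names are hypothetical), a classic operator-injection pattern against a MongoDB-backed login that builds its query directly from request parameters looks like this:

# The server may deserialize this into {"username": {"$ne": null}, "password": {"$ne": null}},
# a query that matches any user. The -g flag stops curl from globbing the brackets.
curl -g 'http://example.com/login?username[$ne]=null&password[$ne]=null'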

Input validation is the best programming technique to stop buffer overflow attacks and is also used to prevent SQL injection attacks.

A sandbox is used to run the web scripts in their own testing environment.

Backdoors are used in computer programs to bypass normal authentication. Backdoor analysis includes checking the operating system, applications, and firmware on devices and making sure they are updated.

Friday, September 4, 2015

Windows - List Inactive AD User Accounts

Solution: you can do this via dsquery from a domain controller (DC).

1. Log into a DC.
2. Open a command prompt.
3. Run the command:
   C:\> dsquery user -inactive 60

This will give you a listing of all user accounts that have been inactive for 60+ weeks. Note that dsquery's -inactive parameter is specified in weeks, not days; use -inactive 9 for roughly 60 days.
A cool script to use:

date /t
time /t
echo.
echo Stale AD Data Report
echo.
if not exist %windir%\ntds\ goto :NOT_DC
echo Computers, Inactive 60 Weeks
echo -------------------------------
dsquery computer -inactive 60
echo.
echo Users, Inactive 60 Weeks
echo -------------------------------
dsquery user -inactive 60
goto :EOF
:NOT_DC
echo This is not a DC, did not find:
echo %windir%\ntds\


Source:- http://www.puryear-it.com/blog/2012/10/26/list-inactive-ad-user-accounts/

RHEL - List users currently logged into the system

How to find who is currently logged into the system.

Here is a list of commands that show the users who are currently logged in.

who command (and related commands)
$ who
$ w
$ who -u

Last command -
$ last

Using finger command
$ finger


users - lists the login names of the users currently logged in
$ users

RHEL7 - Configure smb access

Perform the following tasks on your server:
1. Share the /smbshared directory via SMB on serverX
2. Set up your SMB server as a member of TESTGROUP workgroup
3. Name your share netdata
4. Make the share netdata available to example.com domain clients only
5. Make the share netdata browseable
Now,


- susan must have read access to the share, authenticating with the same password “password”, if necessary
- Configure serverX to share /opstack; the SMB share name must be cluster.
- The user frankenstein must have read, write, and access permissions on the /opstack SMB share.
- The user martin must have read access to the /opstack SMB share.
- Both users should have the SMB password "SaniTago".

yum install samba samba-client

systemctl start smb nmb
systemctl enable smb nmb

firewall-cmd --permanent --add-service=samba
firewall-cmd --reload

mkdir -p /smbshared

semanage fcontext -a -t samba_share_t "/smbshared(/.*)?"
restorecon -Rv /smbshared

setfacl -m u:susan:r-X /smbshared

vi /etc/samba/smb.conf

workgroup = TESTGROUP
[netdata]
comment = netdata share
path = /smbshared
browseable = yes
valid users = susan
read only = yes
hosts allow = 172.25.1. # (run ifconfig to get your IP address and use only its first three octets)

grep -i "susan" /etc/passwd    # (if it returns nothing, create the user first)

useradd -s /sbin/nologin susan
smbpasswd -a susan



mkdir -p /opstack

semanage fcontext -a -t samba_share_t "/ opstack (/.*)?"
restorecon -Rv / opstack

vim /etc/samba/smb.conf

[cluster]
comment = opstack share
path = /opstack
write list = frankenstein
writable = no

useradd -s /sbin/nologin frankenstein
useradd -s /sbin/nologin martin

smbpasswd -a frankenstein
smbpasswd -a martin
# Allow frankenstein write access & martin read access to the directory
setfacl -m u:frankenstein:rwX /opstack/
setfacl -m u:martin:r-X /opstack/

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
smb multiuser mount.
-  Mount the samba share /opstack permanently beneath /mnt/smbspace on desktopX as a multiuser mount.
-  The samba share should be mounted with the credentials of frankenstein.

# yum -y install cifs-utils samba-client
# mkdir -p /mnt/smbspace

# vi /root/smb-multiuser.txt

username=frankenstein
password=SaniTago

# chmod 0600 /root/smb-multiuser.txt

# vi /etc/fstab
//server1/cluster /mnt/smbspace cifs defaults,sec=ntlmssp,credentials=/root/smb-multiuser.txt,multiuser 0 0 
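With the multiuser option, the share is mounted once with frankenstein's credentials, but each user then supplies their own SMB credentials to the kernel; a hedged sketch of how a second user gains access (assuming a local user martin exists on the client):

# mount /mnt/smbspace          # root mounts the share per the fstab entry
# su - martin
$ cifscreds add server1        # martin caches his own SMB password (from cifs-utils)
$ ls /mnt/smbspace             # access is now evaluated against martin's account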

RHEL7 - Setup and configure Link aggregation

RHEL7 - Link aggregation - It is much the same concept as link aggregation on Solaris LDOMs.

Setup and configure link aggregation on RHEL7 server

Server Address: 192.168.10.120/24
Client Address: 192.168.10.121/24


Configure on your server
# nmcli con add type team con-name Team1 ifname Team1 config '{"runner": {"name": "activebackup"}}'
# nmcli con modify Team1 ipv4.addresses 192.168.10.120/24
# nmcli con modify Team1 ipv4.method manual

# nmcli con add type team-slave con-name Team1-slave1 ifname eth1 master Team1
# nmcli con add type team-slave con-name Team1-slave2 ifname eth2 master Team1

# nmcli con up Team1
# nmcli con up Team1-slave1
# nmcli con up Team1-slave2

Test the connection
# teamdctl Team1 state

Disconnect the first interface
# nmcli dev dis eth1
# nmcli con up Team1-slave1
# teamnl Team1 ports
# teamnl Team1 getoption activeport
# teamnl Team1 setoption activeport PORT_NUMBER

Now, set up a client and once done, test the connectivity between client and server.
# ping -I Team1 192.168.10.121


Set up and configure on client
# nmcli con add type team con-name Team1 ifname Team1 config '{"runner": {"name": "activebackup"}}'
# nmcli con modify Team1 ipv4.addresses 192.168.10.121/24
# nmcli con modify Team1 ipv4.method manual

# nmcli con add type team-slave con-name Team1-slave1 ifname eth1 master Team1
# nmcli con add type team-slave con-name Team1-slave2 ifname eth2 master Team1

# nmcli con up Team1
# nmcli con up Team1-slave1
# nmcli con up Team1-slave2

Testing the connectivity
# teamdctl Team1 state

# nmcli dev dis eth1
# nmcli con up Team1-slave1
# teamnl Team1 ports
# teamnl Team1 getoption activeport
# teamnl Team1 setoption activeport PORT_NUMBER

Verify the connectivity with server.
# ping -I Team1 192.168.10.120

Thursday, September 3, 2015

RHEL - Copy, move, and remove files and directories

Create files, directories, and links; copy and move them.
a. Create/touch: touch file
$ gedit filename [ using GUI ]
$ vi filename
$ touch /tmp/i_was_here
b. Move/rename: mv sourcefile destfile
$ mv /tmp/httpd.conf /var/httpd/conf/httpd.conf
c. Remove: rm file [ remove/delete ]
$ rm /tmp/httpd.conf.old
$ rmdir /tmp/mydir (must be empty)
$ rm -rf /tmp/mydir
$ \rm -rf /tmp/mydir
d. Copy: cp sourcefile destfile
$ cp httpd.conf httpd.conf.date
$ cp -r /tmp/mydir /var/www/html/web

e. Create hard and soft links
1. Soft link: ln -s sourcefile destlink
$ ln -s /export/home/users /home/users
2. Hard link: ln sourcefile destlink
$ ln /export/home/users /home/users # must be on the same FS
# note: because a hard link shares the same inode as its target, it can't cross filesystems

Q. What is the difference between soft and hard link? Why do you create soft/hard link?
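A. A quick demonstration answers it: a hard link shares an inode (and therefore data blocks) with its target, while a soft link is a separate file that merely stores a path. Run this in any scratch directory:

$ touch original
$ ln original hard           # hard link: same inode as original
$ ln -s original soft        # soft link: new inode that stores the path "original"
$ ls -li original hard soft  # original and hard show the same inode number
$ rm original
$ cat hard                   # still works; data persists while any hard link remains
$ cat soft                   # fails; the symlink now dangles

You create a soft link to point across filesystems or at directories; a hard link gives the same file a second name on the same filesystem.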

Surya Namaskara - The King of Yoga Asanas



Surya Namaskara: 

Surya Namaskara is a sequence of salutations to the sun. It finds its roots in worship of Surya, the sun god. This sequence of activities and poses can be practiced as a physical exercise or as a complete "sadhana" which incorporates exercise postures, breathing exercises, and profound meditation.

The rewards of Surya Namaskara are many. It is an excellent exercise for achieving fitness, including a warm-up session as well as the main exercise. In this day and age, when every individual faces a time crunch, it is difficult to exercise regularly, but with just fifteen minutes of this yoga, maximum results and great benefits can be obtained.

These yogic asanas directly affect the endocrine system and the digestive system, balance any hormonal imbalances and give a quick shot of vigor to the body in a matter of minutes. In about three months’ time and with dedicated practice of Surya Namaskara the suppleness and litheness of the body can be increased.

Cultural Beliefs:

If one is devotionally inclined then the exercise can be done with full knowledge of the significance of worshipping the sun. This will purify the heart and mind. Surya namaskara also helps to bring the flow of pranic or bioplasmic energy into balance and remove blockages in the nadis through which it flows. Surya namaskara is an excellent practice with which to start the day.

Benefits: 
  • It improves the blood circulation of all the important organs of the body.
  • Increase the level of oxygen in blood, also improves breathing capacity.
  • Improves functioning of heart and lungs.
  • Strengthens the muscles of the arms and waist.
  • Makes the spine and waist more flexible.
  • Helps in reducing the fat around the abdomen and thus reduces weight.
  • Improves digestion.
  • Improves concentration power.

 Scientific Reasons:

Studies have shown that cardiorespiratory parameters change significantly after the practice of Surya Namaskara. In general, yogic practices have been proposed to reduce resting heart rate and blood pressure.

Surya namaskara increases the heartbeat and the workings of the whole circulatory system, helping to eliminate waste materials from the body. Areas of sluggish blood are also removed and replaced by purified and oxygenated blood. All the cells of the body receive extra nutrition enabling them to function more efficiently.

Most people tend to breathe superficially in short and shallow gasps. This starves the body of the oxygen it requires for perfect health. Carbon dioxide also tends to accumulate in the system. Further, underutilization of the lung capacity allows a build-up of germs, which can lead to various illnesses.


Surya namaskara accentuates the exchange of air to and from the lungs, opens and expands the intricate alveoli, or air sacs, of the lung tissue and exercises the muscles of the surrounding chest region. The lungs are emptied of impurities and stale air and the body and brain are revitalized by the extra supply of oxygen they receive. One can almost feel the extra super-charge of energy.

Wednesday, September 2, 2015

Introduction to Cloud Computing: SaaS, PaaS, IaaS

Understanding the Cloud Computing Stack: SaaS, PaaS, IaaS


Executive Summary


Cloud Computing is a broad term that describes a wide range of services. As with other significant developments in technology, many vendors have seized the term “Cloud” and are using it for products that sit outside of the common definition. In order to truly understand how the Cloud can be of value to an organization, it is first important to understand what the Cloud really is and what its different components are. Since the Cloud is a broad collection of services, organizations can choose where, when, and how they use Cloud Computing. In this report we will explain the different types of Cloud Computing services, commonly referred to as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and give some examples and case studies to illustrate how they all work. We will also provide some guidance on situations where particular flavors of Cloud Computing are not the best option for an organization.

The Cloud Computing Stack


Cloud Computing is often described as a stack, in response to the broad range of services built on top of one another under the moniker “Cloud”. The generally accepted definition of Cloud Computing comes from the National Institute of Standards and Technology (NIST) [1]. The NIST definition runs to several hundred words [2] but essentially says that;
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
What this means in plain terms is the ability for end users to utilize parts of bulk resources and that these resources can be acquired quickly and easily.
NIST also offers up several characteristics that it sees as essential for a service to be considered “Cloud”. These characteristics include;
• On-demand self-service. The ability for an end user to sign up and receive services without the long delays that have characterized traditional IT
• Broad network access. Ability to access the service via standard platforms (desktop, laptop, mobile etc)
• Resource pooling. Resources are pooled across multiple customers [3]
• Rapid elasticity. Capability can scale to cope with demand peaks [4]
• Measured Service. Billing is metered and delivered as a utility service [5]
More than a semantic argument around categorization, we believe that in order to maximize the benefits that Cloud Computing brings, a solution needs to demonstrate these particular characteristics. This is especially true since in recent years there has been a move by traditional software vendors to market solutions as “Cloud Computing” which are generally accepted to not fall within the definition of true Cloud Computing, a practice known as “cloud-washing.”
The Cloud Computing stack comprises three distinct categories: Software as a Service, Platform as a Service and Infrastructure as a Service.
In this report we look at all three categories in detail; however, a very simplified way of differentiating these flavors of Cloud Computing is as follows;
• SaaS applications are designed for end-users, delivered over the web
• PaaS is the set of tools and services designed to make coding and deploying those applications quick and efficient
• IaaS is the hardware and software that powers it all – servers, storage, networks, operating systems
To help understand how these three components are related, some have used a transportation analogy;
By itself, infrastructure isn’t useful - it just sits there waiting for someone to make it productive in solving a particular problem. Imagine the Interstate transportation system in the U.S. Even with all these roads built, they wouldn’t be useful without cars and trucks to transport people and goods. In this analogy, the roads are the infrastructure and the cars and trucks are the platform that sits on top of the infrastructure and transports the people and goods. These goods and people might be considered the software and information in the technical realm. [6]
It is important to note that while for illustration purposes this whitepaper draws a clear distinction between SaaS, PaaS and IaaS, the differences between these categories of cloud computing, especially PaaS and IaaS, have blurred in recent months and will continue to do so.[7] Nevertheless, with a general understanding of how these components interact with each other, we will turn our attention in more detail to the top layer of the stack, SaaS.

Software as a Service


Software as a Service (SaaS) is defined as [8];
...software that is deployed over the internet... With SaaS, a provider licenses an application to customers either as a service on demand, through a subscription, in a “pay-as-you-go” model, or (increasingly) at no charge when there is opportunity to generate revenue from streams other than the user, such as from advertisement or user list sales
SaaS is a rapidly growing market, as indicated in recent reports that predict ongoing double-digit growth [9]. This rapid growth indicates that SaaS will soon become commonplace within every organization, and hence it is important that buyers and users of technology understand what SaaS is and where it is suitable.
Characteristics of SaaS
Like other forms of Cloud Computing, it is important to ensure that solutions sold as SaaS in fact comply with generally accepted definitions of Cloud Computing. Some defining characteristics of SaaS include;
• Web access to commercial software
• Software is managed from a central location
• Software delivered in a “one to many” model
• Users not required to handle software upgrades and patches
• Application Programming Interfaces (APIs) allow for integration between different pieces of software (see the sketch after this list)
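To make the API point concrete, most SaaS products expose a REST interface over HTTPS. As a purely illustrative sketch (the subdomain, e-mail address and token below are placeholders, and the exact endpoint and authentication scheme vary by vendor), listing support tickets from a Zendesk-style API might look like;

$ curl -u agent@example.com/token:YOUR_API_TOKEN \
       https://example.zendesk.com/api/v2/tickets.json

The response is typically JSON, which integration code in another application can consume directly.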
Where SaaS Makes Sense
Cloud Computing generally, and SaaS in particular, is a rapidly growing method of delivering technology. That said, organizations considering a move to the cloud will want to consider carefully which applications they move to SaaS. As such, there are particular solutions we consider prime candidates for an initial move to SaaS;
• “Vanilla” offerings where the solution is largely undifferentiated. A good example of a vanilla offering is email, where competitors often use the same software precisely because this fundamental technology is a requirement for doing business but does not itself confer a competitive advantage
• Applications where there is significant interplay between the organization and the outside world. For example, email newsletter campaign software
• Applications that have a significant need for web or mobile access. An example would be mobile sales management software
• Software that is only to be used for a short term need. An example would be collaboration software for a specific project
• Software where demand spikes significantly, for example tax or billing software used once a month
SaaS is widely accepted to have been introduced to the business world by the Salesforce [10] Customer Relationship Management (CRM) product. As one of the earliest entrants, it is not surprising that CRM is the most popular SaaS application area [11]; however, e-mail, financial management, customer service and expense management have also seen strong uptake via SaaS.
Where SaaS May Not be the Best Option
While SaaS is a very valuable tool, there are certain situations where we believe it is not the best option for software delivery. Examples where SaaS may not be appropriate include;
• Applications where extremely fast processing of real time data is required
• Applications where legislation or other regulation does not permit data being hosted externally
• Applications where an existing on-premise solution fulfills all of the organization’s needs
Software as a Service may be the best known aspect of Cloud Computing, but developers and organizations all around the world are leveraging Platform as a Service, which mixes the simplicity of SaaS with the power of IaaS, to great effect.

Case Study: SaaS Allows Groupon to Scale Customer Service[12]


Launched in November 2008, Groupon [13] features a daily deal on the best stuff to do, see, eat and buy in more than 500 markets and 40 countries. The company has thousands of employees spread across its Chicago and Palo Alto offices, regional offices in Europe, Latin America, Asia and Africa with local account executives stationed in many cities. Groupon seeks to sell only quality products and services, be honest and direct with customers, and provide exceptional customer service.
“Within a few months of our founding, our customer base exploded,” says Joe Harrow, Director of Customer Service, Groupon. “At first, I was spending 10 percent of my time responding to customer requests. It gradually became a job for several agents. We realized we simply couldn’t go on without a real ticketing solution.”
Convinced that Groupon’s rapid growth would continue, Harrow researched several enterprise-level support solutions. But he didn’t find a good fit.
“The enterprise-level solutions seemed complicated and difficult to set up,” Harrow recalls. “They would have increased our efficiency, but at the cost of hampering the customer experience.” Harrow then searched the web for online support software and found Zendesk [14]. After a quick evaluation of Zendesk, Harrow knew he had the right solution.
“Right off the bat, Zendesk was intuitive to use,” Harrow says. “It seemed more powerful and robust than other online support solutions, and it had been rated very highly in reviews we’d read. Plus, we knew that because it was a web-based solution, it could easily scale to support our increasing volume.”
Groupon now employs more than 150 customer support agents, who handle nearly 15,000 tickets per day. Zendesk’s macros, which are predefined answers to FAQs, are Groupon’s favorite Zendesk feature. These macros help Groupon train its agents to deliver one of the company’s customer service hallmarks: one-touch resolution.
Groupon has also found it easy to integrate Zendesk with other solutions. By integrating Zendesk with GoodData, Groupon has extended and enhanced its reporting – going well beyond the limits of its old spreadsheets. As an example of the sort of scalability that SaaS brings, Groupon recently processed its millionth customer ticket [15].

Platform as a Service


Platform as a Service (PaaS) brings the benefits that SaaS brought for applications over to the software development world. PaaS can be defined as a computing platform that allows the creation of web applications quickly and easily, without the complexity of buying and maintaining the software and infrastructure underneath them.
PaaS is analogous to SaaS except that, rather than being software delivered over the web, it is a platform for the creation of software, delivered over the web.
Characteristics of PaaS
There are a number of different takes on what constitutes PaaS but some basic characteristics include [16];
• Services to develop, test, deploy, host and maintain applications in the same integrated development environment. All the varying services needed to fulfil the application development process
• Web based user interface creation tools help to create, modify, test and deploy different UI scenarios
• Multi-tenant architecture where multiple concurrent users utilize the same development application
• Built in scalability of deployed software including load balancing and failover
• Integration with web services and databases via common standards
• Support for development team collaboration – some PaaS solutions include project planning and communication tools
• Tools to handle billing and subscription management
PaaS, which is similar in many ways to Infrastructure as a Service (discussed below), is differentiated from IaaS by the addition of value-added services. It comes in two distinct flavours;
1. A collaborative platform for software development, focused on workflow management regardless of the data source being used for the application. An example of this approach would be Heroku, a PaaS that utilizes the Ruby on Rails development framework.
2. A platform that allows for the creation of software utilizing proprietary data from an application. This sort of PaaS can be seen as a method to create applications with a common data form or type. An example of this sort of platform would be the Force.com PaaS from Salesforce.com which is used almost exclusively to develop applications that work with the Salesforce.com CRM
Where PaaS Makes Sense
PaaS is especially useful in any situation where multiple developers will be working on a development project or where other external parties need to interact with the development process. As the case study below illustrates, it is proving invaluable for those who have an existing data source (for example, sales information from a customer relationship management tool) and want to create applications which leverage that data. Finally, PaaS is useful where developers wish to automate testing and deployment services.
The popularity of agile software development, a group of software development methodologies based on iterative and incremental development, will also increase the uptake of PaaS as it eases the difficulties around rapid development and iteration of software.
Some examples of PaaS include Google App Engine [17], Microsoft Azure Services [18], and the Force.com [19] platform.
Where PaaS May Not be the Best Option
We contend that PaaS will become the predominant approach towards software development. The ability to automate processes, use pre-defined components and building blocks and deploy automatically to production will provide sufficient value to be highly persuasive. That said, there are certain situations where PaaS may not be ideal, examples include;
• Where the application needs to be highly portable in terms of where it is hosted
• Where proprietary languages or approaches would impact on the development process
• Where a proprietary language would hinder later moves to another provider – concerns are raised about vendor lock-in [20]
• Where application performance requires customization of the underlying hardware and software

Case Study: Menumate Uses PaaS to Serve Tasty Applications


Menumate [21] is a provider of point of sale hardware and software for the hospitality industry across Australasia. Menumate has taken advantage of the Force.com PaaS to migrate over time a series of legacy applications used in the business.
Daniel Fowlie and Abhinav Keswani are Directors of development house Trineo [22], the company responsible for boutique development for Menumate. Fowlie explains that the use of the Force.com platform has allowed Menumate to centralise, modernise and integrate an otherwise disparate in-house software toolkit.
Keswani feels that a more conventional development approach would require significant infrastructure, connectivity and security work, and would introduce uptime considerations, whereas the Force.com platform inherently provides these non-functional requirements, allowing Menumate and Trineo to focus purely on developing the needed functionality. Additionally, utilizing a PaaS approach has meant Trineo could take advantage of both existing integrations and automated deployment tools, another example of PaaS easing the development process.
Using PaaS, Trineo has been able to migrate these legacy applications incrementally. Some of these applications are:
License Key Generation - The Menumate software uses license keys to activate the features that the customer has paid for. The power of the PaaS programming language allowed Menumate to quickly port this code to Force.com where the license keys are linked to the customer record in the Salesforce.com CRM. This allows Sales and Support staff to quickly see the status of licenses.
Enhanced Case Management - A lot of the support cases Menumate was dealing with were orders for consumables. To handle these, it previously had a separate DOS-based application that would allow the user to build up an order and create an invoice. Menumate can now add products to a support case and automatically send an invoice to its accounting software using an existing integration product.
Label Printing - Another legacy application was for creating freight labels for sending consumables and hardware to customers. Utilising the PaaS technology, these can now be printed directly from the customer record.

Utilizing a PaaS development environment has resulted in the creation of these applications being significantly faster than would otherwise be the case. In some examples, in the absence of PaaS, the cost of developing the application would have been prohibitive.
PaaS is undoubtedly an exciting and powerful form of Cloud Computing; however, in terms of market awareness it’s hard to look past Infrastructure as a Service and the rapid growth it’s seeing in the marketplace.

Infrastructure as a Service


Infrastructure as a Service (IaaS) is a way of delivering Cloud Computing infrastructure – servers, storage, network and operating systems – as an on-demand service. Rather than purchasing servers, software, datacenter space or network equipment, clients instead buy those resources as a fully outsourced service on demand [23].
As we detailed in a previous whitepaper [24], within IaaS, there are some sub-categories that are worth noting. Generally IaaS can be obtained as public or private infrastructure or a combination of the two. “Public cloud” is considered infrastructure that consists of shared resources, deployed on a self-service basis over the Internet.
By contrast, “private cloud” is infrastructure that emulates some Cloud Computing features, such as virtualization, but does so on a private network. Additionally, some hosting providers are beginning to offer a combination of traditional dedicated hosting alongside public and/or private cloud networks. This combination approach is generally called “Hybrid Cloud”.
Characteristics of IaaS
As with the two previous sections, SaaS and PaaS, IaaS is a rapidly developing field. That said, there are some core characteristics which describe what IaaS is. IaaS is generally accepted to comply with the following;
• Resources are distributed as a service
• Allows for dynamic scaling
• Has a variable cost, utility pricing model
• Generally includes multiple users on a single piece of hardware
There is a plethora of IaaS providers out there, from the largest Cloud players like Amazon Web Services [25] and Rackspace [26] to more boutique regional players.
As mentioned previously, the line between PaaS and IaaS is becoming more blurred as vendors introduce tools as part of IaaS that help with deployment including the ability to deploy multiple types of clouds [27].
Where IaaS Makes Sense
IaaS makes sense in a number of situations, and these are closely related to the benefits that Cloud Computing brings. Situations that are particularly suitable for Cloud infrastructure include;
• Where demand is very volatile – any time there are significant spikes and troughs in terms of demand on the infrastructure
• For new organizations without the capital to invest in hardware
• Where the organization is growing rapidly and scaling hardware would be problematic
• Where there is pressure on the organization to limit capital expenditure and to move to operating expenditure
• For specific line of business, trial or temporary infrastructural needs
Where IaaS May Not be the Best Option
While IaaS provides massive advantages for situations where scalability and quick provisioning are beneficial, there are situations where its limitations may be problematic. Examples of situations where we would advise caution with regard to IaaS include;
• Where regulatory compliance makes the offshoring or outsourcing of data storage and processing difficult
• Where the highest levels of performance are required, and on-premise or dedicated hosted infrastructure has the capacity to meet the organization’s needs

Case Study: Live Smart Helps Dieters by Taking an Infrastructure Diet


Live Smart Solutions is the parent company behind The Diet Solution Program (http://www.thedietsolutionprogram.com), a company producing books and online diet programs. Beyond Diet [28] is an interactive community site for individuals on their diet program.
Started in 2008, the company has seen rapid growth including a 50x revenue jump in 2010. This translates to average daily site visits of 300,000 with spikes up to one million unique viewers. When deciding on a strategy for their infrastructure, Beyond Diet needed something that was both low-touch and highly scalable. It is important that Beyond Diet have the ability to both scale up and down as their marketing strategy sees large traffic spikes on a regular basis.
Rob Volk, CTO of Live Smart, reports that moving to Cloud infrastructure has given him more peace of mind. Formerly Live Smart had a part-time systems administrator working on their sites, and as Volk says,
It was not the best option for us. Now with Managed Cloud [an IaaS service offered by cloud computing provider Rackspace], Rackspace is basically acting as our Linux and Windows administrator. They’ll make our changes as we need them, and respond to any downtime, 24 hours a day. Within minutes, an engineer will log on to fix the problem.
The main drivers for Volk moving to Cloud were the ability to focus on core business and leave day-to-day management of infrastructure to the experts. Multiple levels of redundancy, fast configuration and a high degree of flexibility from Cloud providers were also deciding factors. Interestingly, Volk never even considered running his own physical servers; rather, the decision was between hosted servers and the Cloud.
The decision was made to go with Cloud because it provided reduced cost and higher flexibility than corresponding dedicated server options.
Volk is using multiple Cloud providers: he has three web servers, multiple database servers and a load balancer with Rackspace, while also using Amazon’s S3 service.
The biggest benefit Volk sees with Cloud infrastructure is scalability. As he explains,
After New Year’s, everyone goes on a diet. Our peak time is right after New Year’s: we might get three times the traffic from January to March. With Cloud Servers, we’re able to spin up new web front ends within a matter of minutes, then take them back down once traffic goes down. We have this elasticity in our farm that is only possible in a virtualized environment.

Conclusion


Cloud Computing is a term that doesn’t describe a single thing – rather it is a general term that sits over a variety of services from Infrastructure as a Service at the base, through Platform as a Service as a development tool and through to Software as a Service replacing on-premise applications.
For organizations looking to move to Cloud Computing, it is important to understand the different aspects of Cloud Computing and to assess their own situation and decide which types of solutions are appropriate for their unique needs.
Cloud Computing is a rapidly accelerating revolution within IT and will become the default method of IT delivery in the future – organizations would be advised to begin planning their move to the cloud sooner rather than later.

Source: http://www.rackspace.com/knowledge_center/whitepaper/understanding-the-cloud-computing-stack-saas-paas-iaas

RHEL7 - Resize the logical volume

--> Resize the /data mount point from 4 GB to 3 GB. (This procedure assumes an ext2/ext3/ext4 filesystem; XFS, the RHEL 7 default, cannot be shrunk.)

1. Unmount the volume
# umount /data

2. Check the filesystem (resize2fs requires a clean e2fsck run before shrinking)
# e2fsck -f /dev/datavg/FS_data

3. Shrink the filesystem to the target size
# resize2fs /dev/datavg/FS_data 3G
# lvscan

4. Reduce the logical volume to the same size, then remount
# lvreduce -L 3G /dev/datavg/FS_data
# mount -a

5. Verify the size
# df -h /data
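
Note: newer LVM versions can combine steps 2 to 4 into a single command; a minimal sketch using the same volume as above:

# umount /data
# lvreduce -r -L 3G /dev/datavg/FS_data
# mount -a

The -r (--resizefs) flag calls fsadm, which runs the filesystem check and resize2fs for you, avoiding the classic mistake of reducing the volume below the size of the filesystem.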

RHEL 7 - Configure NTP client on your system


On RHEL 7, you can use chrony rather than the standard ntp package.

1. Install chrony
# yum install chrony

2. Configure chrony (comment out the default server/pool entries and point it at your own NTP server)
# vi /etc/chrony.conf

server 192.168.10.110 iburst

3. Enable and restart the service (chronyd may already be running by default, so restart it to pick up the new configuration)
# systemctl enable chronyd
# systemctl restart chronyd


4. Verify the reach level and sync status
# chronyc sources -v
# chronyc tracking
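
If this machine should also serve time to other clients on the LAN, chrony supports that too; a minimal sketch for /etc/chrony.conf (the subnet below is an assumption, adjust it to your network):

allow 192.168.10.0/24
local stratum 10

The local directive lets chronyd keep serving time even when its own upstream sources are unreachable. Remember to permit NTP through the firewall:

# firewall-cmd --permanent --add-service=ntp
# firewall-cmd --reload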

RHEL - Autofs mount entry

Autofs mount

$ more /etc/auto.master
+auto.master

#/home   /etc/auto.home     --timeout 60
/-      /etc/auto.direct   --timeout 60


$ more /etc/auto.direct
# Mount hector for NFS shares
/repo_tmp    -rw,soft,intr,tcp,noatime    server1:/data/repo_tmp
/archive     -rw,soft,intr,tcp,noatime    server2:/data/archive
/backup      -rw,soft,intr,tcp,noatime    server3:/data/backup
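
For these maps to take effect, the autofs service must be installed and reloaded after any edit; assuming the stock RHEL packages:

# yum install autofs
# systemctl enable autofs
# systemctl restart autofs

With the direct map (/-), an ls /repo_tmp will then trigger the NFS mount on demand, and the share is unmounted again after the 60-second timeout.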

RHEL7 - Crontab Example

1. Add a cron entry for the root user (this example echoes a message daily at 22:23)
# crontab -e
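# field order: minute hour day-of-month month day-of-week command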
23    22    *    *    *    /bin/echo    "Hello, World !!!"

2. Add a cron entry for a specific user (here, sarah; this runs daily at 14:23)
# crontab -eu sarah
23    14    *    *    *    /bin/echo    "hyer"
3. To deny a user from running cron jobs, add their username to /etc/cron.deny
# vi /etc/cron.deny
john

4. Make sure the crond service is enabled and running
# systemctl status crond
# systemctl enable crond
# systemctl start crond

5. Log in as the denied user and run crontab -l; you should get a permission denied error
# su - john
$ crontab -l
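
With cronie (the default cron implementation on RHEL 7), the error is typically along the lines of “You (john) are not allowed to use this program (crontab)”, though the exact wording may vary by version.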