Monday, October 19, 2015

Subnet Guide: how many hosts can you have on a subnet?


How do you find out how many hosts you can have on a subnet?

There are two main types of IP addresses: IPv4 and IPv6. An IPv4 address is 32 bits (4 octets, 000.000.000.000; each octet has 8 bits, totaling 32) and an IPv6 address is 128 bits. IPv4 provides roughly 4 billion addresses, and they are running out. That is why IPv6 was created; at 128 bits, it provides vastly more addresses (about 3.4 x 10^38).

IP address classes

Class   First octet range   Leading bits
A       1 – 126*            0
B       128 – 191           10
C       192 – 223           110
D       224 – 239           1110

* Note: 0 and 127 are reserved (127 is the loopback range)


In subnetting, some bits are reserved for the network part and some for the host part. Here is an example of each class and its split between network and host bits.


NNNNNNNN   .HHHHHHHH   .HHHHHHHH   .HHHHHHHH   Class A Address
NNNNNNNN   .NNNNNNNN   .HHHHHHHH   .HHHHHHHH   Class B Address
NNNNNNNN   .NNNNNNNN   .NNNNNNNN   .HHHHHHHH   Class C Address


Let's say you have a 32-bit subnet mask of all ones. Let's talk about 192.168.1.0/21.

255.255.255.255 ==>> each section is 8 bits. On a /24 network, the first 3 sections are for the network and the 4th one is for the host part.

In this case, you have 24 network bits and 8 host bits. If you give 3 bits to the host part, the host side has 11 (8 + 3) bits, so you can have 2^11 - 2 hosts (one address is reserved for the network base and one for broadcast). On the network side you have (24 - 3 =) 21 bits, so your network is 192.168.x.0/21.

If you give 2 bits from the network side instead, the host side will have (8 + 2 =) 10 bits, so you will have 2^10 - 2 hosts, and the network side will have 22 (24 - 2) bits. So your network is 192.168.x.0/22.

netmask          shorthand       addresses        resulting subnet
255.255.255.0    /24 [8-bit]     2^8 = 256        254 hosts + 1 bcast + 1 net base
255.255.255.128  /25 [7-bit]     2^7 = 128        126 hosts + 1 bcast + 1 net base
255.255.255.192  /26 [6-bit]     2^6 = 64         62 hosts + 1 bcast + 1 net base
255.255.255.224  /27 [5-bit]     2^5 = 32         30 hosts + 1 bcast + 1 net base
255.255.255.240  /28 [4-bit]     2^4 = 16         14 hosts + 1 bcast + 1 net base
255.255.255.248  /29 [3-bit]     2^3 = 8          6 hosts + 1 bcast + 1 net base
255.255.255.252  /30 [2-bit]     2^2 = 4          2 hosts + 1 bcast + 1 net base
255.255.255.254  /31 [1-bit]     2^1 = 2          invalid (no possible hosts)
255.255.255.255  /32 [0-bit]     2^0 = 1          a host route (odd duck case)

Prefix  Addresses       Hosts   Netmask         Amount of a Class C
/30     4       2       255.255.255.252 1/64
/29     8       6       255.255.255.248 1/32
/28     16      14      255.255.255.240 1/16
/27     32      30      255.255.255.224 1/8
/26     64      62      255.255.255.192 1/4
/25     128     126     255.255.255.128 1/2
/24     256     254     255.255.255.0   1
/23     512     510     255.255.254.0   2
/22     1024    1022    255.255.252.0   4
/21     2048    2046    255.255.248.0   8
/20     4096    4094    255.255.240.0   16
/19     8192    8190    255.255.224.0   32
/18     16384   16382   255.255.192.0   64
/17     32768   32766   255.255.128.0   128
/16     65536   65534   255.255.0.0     256
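You can sanity-check any of these rows without doing the arithmetic by hand. Here is a minimal Python sketch using the standard library's ipaddress module (the 192.168.0.0/21 network is just an example):

import ipaddress

net = ipaddress.ip_network('192.168.0.0/21')
print(net.netmask)             # 255.255.248.0
print(net.num_addresses)       # 2048 total addresses
print(net.num_addresses - 2)   # 2046 usable hosts (minus net base and broadcast)
print(net.broadcast_address)   # 192.168.7.255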

http://www.digipro.com/Papers/IP_Subnetting.shtml
http://www.bassconsulting.com/ip_subnetting.htm
http://www.techrepublic.com/blog/data-center/ip-subnetting-made-easy-125343/
http://www.tcpipguide.com/free/t_IPSubnettingStep5DeterminingHostAddressesForEachSu.htm
http://www.cisco.com/c/en/us/support/docs/ip/routing-information-protocol-rip/13788-3.html
http://subnettingmadeeasy.blogspot.com/2007/11/subnetting-made-easy-lesson.html
https://srobb.net/subnet.html


Saturday, January 5, 2013



Something to know about -

Hashing provides integrity for digital signatures and other data. A digital signature is a hash of the message encrypted with the sender's private key.

A digital signature is an encrypted hash of a message. The sender’s private
key encrypts the hash of the message to create the digital signature. The
recipient decrypts the hash with the sender’s public key. If successful, it
provides authentication, non-repudiation, and integrity. Authentication
identifies the sender. Integrity verifies the message has not been modified.
Non-repudiation prevents senders from later denying they sent an email.


The sender encrypts an email message with the recipient's public key, and the recipient uses the recipient's private key to decrypt the encrypted email message.



Time Offsets

Windows: 64-bit time stamp

- Number of 100-nanosecond intervals since
- January 1, 1601 00:00:00 GMT
- This stops working in 58,000 years

Unix: 32-bit time stamp
- Number of seconds since January 1, 1970 00:00:00 GMT
- This stops working on Tuesday, January 19, 2038 at 3:14:07 GMT
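For illustration, here is a small Python sketch that works with both kinds of time stamp. The FILETIME epoch offset (the number of 100-nanosecond intervals between January 1, 1601 and January 1, 1970) is a well-known constant; the function name is my own:

import datetime

# 100-ns intervals between 1601-01-01 and 1970-01-01
EPOCH_AS_FILETIME = 116444736000000000

def filetime_to_datetime(ft):
    # Convert a Windows 64-bit FILETIME value to a UTC datetime
    seconds = (ft - EPOCH_AS_FILETIME) / 10_000_000
    return (datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
            + datetime.timedelta(seconds=seconds))

# The 32-bit Unix rollover: 2**31 - 1 seconds after the Unix epoch
print(datetime.datetime.fromtimestamp(2**31 - 1, tz=datetime.timezone.utc))
# 2038-01-19 03:14:07+00:00

print(filetime_to_datetime(EPOCH_AS_FILETIME))   # 1970-01-01 00:00:00+00:00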


Two popular hashing algorithms used to verify integrity are MD5 and SHA.
HMAC verifies both the integrity and authenticity of a message with the use
of a shared secret. Other protocols such as IPsec and TLS use HMAC-MD5
and HMAC-SHA1.
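As a quick illustration, Python's standard hmac and hashlib modules can compute and verify an HMAC. This is just a sketch; the key and message are made up, and SHA-256 is used here although HMAC-MD5 and HMAC-SHA1 work the same way:

import hmac, hashlib

key = b'shared-secret'       # hypothetical shared secret
msg = b'The quick brown fox'

tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

# The receiver recomputes the tag with the same key and compares in
# constant time; a match proves both integrity and authenticity.
expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))   # True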

IPsec uses HMAC for authentication and integrity. It can use either AES or 3DES for encryption with ESP. When IPsec uses ESP in tunnel mode, it encrypts the entire original packet, including the original IP header, and creates an additional IP header.



A VLAN, or virtual local-area network, was originally designed to decrease broadcast traffic on the data link layer. However, if implemented properly, it can also reduce the likelihood of having information compromised by network sniffers. It does both of these by compartmentalizing the network, usually by MAC address. This should not be confused with subnetting, which compartmentalizes the network by IP address on the network layer.

Banner grabbing is a technique used to find out information about web servers, FTP servers, and mail servers.  A VPN, or virtual private network, enables the secure connection of remote users to your network.
RADIUS authenticates users to a network and is sometimes used with a VPN.






Sec+ - Identifying Risk

It is not possible to eliminate risk, but you can take steps to manage it. An
organization can avoid a risk by not providing a service or not participating in
a risky activity. Insurance transfers the risk to another entity. You can
mitigate risk by implementing controls, but when the cost of the controls
exceeds the cost of the risk, an organization accepts the remaining, or
residual risk.

A risk assessment is a point-in-time assessment, or a snapshot. In other words, it assesses the risks based on current conditions, such as current threats, vulnerabilities, and existing controls.


Risk assessments use quantitative measurements or qualitative measurements. Quantitative
measurements use numbers, such as a monetary figure representing cost and asset values. Qualitative measurements use judgments.


Quantitative Risk Assessment

One quantitative model uses the following values to determine risks:
Single loss expectancy (SLE). The SLE is the cost of any single loss.
Annual rate of occurrence (ARO). The ARO indicates how many times the loss will occur
in a year. If the ARO is less than 1, the ARO is represented as a percentage. For example, if you anticipate the occurrence once every two years, the ARO is 50 percent or .5.
Annual loss expectancy (ALE). The ALE is the SLE × ARO.

Your company loses 1 laptop every month. One laptop costs $2,000.
What is the ALE?

Monthly loss = 1 laptop
SLE = $2,000
ARO = 1 laptop/month x 12 = 12
ALE = SLE x ARO
    = $2,000 x 12 = $24,000

Suppose a one-time cost of $1,000 buys locks.

If they steal only 2 laptops a year after the locks are installed, then

New ALE = 2 x $2,000 = $4,000

Now the total saving is $24,000 - $4,000 = $20,000/yr,

so after spending $1,000 on locks, the net saving is $19,000,

and it makes sense to purchase the locks.
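The same arithmetic in a short Python sketch (numbers from the laptop example above):

def ale(sle, aro):
    # Annual loss expectancy = single loss expectancy x annual rate of occurrence
    return sle * aro

before = ale(2000, 12)               # $24,000/yr without locks
after = ale(2000, 2)                 # $4,000/yr with locks
net_saving = before - after - 1000   # minus the one-time $1,000 lock cost
print(before, after, net_saving)     # 24000 4000 19000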

Managers use these two simple guidelines for most of these decisions:
- If the cost of the control is less than the savings, purchase it.
- If the cost of the control is greater than the savings, accept the risk.

If you plan to implement another method such as biometrics and the total cost to implement it is $30,000, it does not make sense to spend $6,000 more than the $24,000 annual loss. But you should also consider the sensitivity of the data. Analyze all factors.

Note:
A quantitative risk assessment uses specific monetary amounts to identify
cost and asset values. The SLE identifies the amount of each loss, the ARO
identifies the number of failures in a year, and the ALE identifies the
expected annual loss. You calculate the ALE as SLE × ARO. A qualitative
risk assessment uses judgment to categorize risks based on probability and
impact.


Qualitative Risk Assessment

You can think of quantitative as using a quantity or a number, whereas qualitative is related to quality, which is often a matter of judgment.



You need to calculate the ALE for a server. The value of the server is $3,000, but it has crashed 10 times in the past year. Each time it crashed, it resulted in a 10 percent loss. What is the ALE?

The annual loss expectancy (ALE) is $3,000. It is calculated as single loss expectancy (SLE) × annual rate of occurrence (ARO). The SLE is 10 percent of $3,000 ($300) and the ARO is 10. 10 × $300 is $3,000.





Your organization hosts a web site within a DMZ and the web site accesses a database server in the internal network. ACLs on firewalls prevent any connections to the database server except from the web server. Database fields holding customer data are encrypted and all data in transit between the web site server and the database server is encrypted. The GREATEST risk to the data on the server is a SQL injection attack, which allows an attacker to send commands to the database server and access data. Encryption protects the data on the server and in transit, but the web server can decrypt it.

Saturday, October 17, 2015

A one-time pad

Encrypting with a one-time pad is a very strong encryption technique. In this video, I’ll demonstrate how you can use a one-time pad to encrypt your data.


https://www.youtube.com/watch?v=0AagcCHlNMY


A one-time pad is a cipher that was created in the early 1900s, and it was built when teletype machines were first becoming popular as a way to encrypt the communication on teletype. So this was all done on pieces of paper that would go into a teletype and pieces of paper that would come out on the other side. It was an automated system. It was one that really had a very interesting effect on the communications because then, you could really have private messages go back and forth between one place and another. And it really worked on this concept of the pad, and if you think of the pad as a pad of paper, that’s really what this ended up looking like is a single pad of paper with a key imprinted upon it.
This was really interesting in that it wasn’t complicated, there wasn’t a lot of mathematics involved, and it was one that was also very, very secure. When used properly, a one-time pad is one of these unbreakable kind of ciphers, and as we get into understanding more about the one-time pad, you’ll start to understand why it would be so difficult to break this type of communication. For the one-time pad to be this secure, there were a few rules we had to keep in mind. The first one is the key, the piece of information that is on our pad of paper, needs to be the same size as the plain text that we need to encrypt, so the number of letters in the key and the number of letters in the message you’re sending are exactly the same. Just keep that in mind.
The second rule is that the key is really completely randomized. This is not a pseudo-random or some type of a very static computer function that’s creating this. It really is what we call a true random set of characters that we’re putting on there, or a set of numbers. A one-time pad can be used in many different ways. The key should only be used one time, and that’s one of the nice things about having this on a piece of paper. We use the key. We encrypt with it. On the other side, we decrypt with it, and then, we throw away the key. And you pull off that piece of paper on the pad, you burn it, you get rid of it, and there’s obviously another key you would need to use next time.
That’s one of the important parts of this is every time you send a message, the key is going to change, thereby making the entire communication very, very difficult to decrypt. Even if you were able to crack the key one time, you would not be able to crack it again because now, the key is completely different. There are, hopefully, only going to be two copies of this key, one on the person who is sending the message, one the person who is receiving the message, and those are the only two people who would ever have a copy of this key. If somebody was to get a copy of the key somewhere in the middle, they would absolutely be able to decrypt this. So if you follow these rules, you can be assured that your one-time pad communication is not going to be seen by anyone else.
The process of encrypting with a one-time pad is relatively simple. We’re going to step through it right here. Obviously, we would follow these same steps in reverse to decrypt the information. The first thing we want to do is put our entire alphabet down, and we’re going to assign every letter a number. The easy way is to start at zero with A and end up at 25 with the letter Z. That will be– at least the numbers, we’ll be able to use to perform our calculations.
Now, let’s take a message. Let’s take something in plain text like the word “hello,” and we would like to encrypt this. But to encrypt it, we’re also going to need a key, and as you recall, we need a key that’s exactly the same size as the plain text. So if we go to our one-time pad and we look at our key, we see that our key, in this case, X, M, C, K, L, a random set of letters. Obviously, this key will change every time we send a message. So we could send the word “hello” this time. The next time we send the word “hello,” it’s going to be completely different in the cipher text that we look at because your key is going to be different every time.
Well, we can’t calculate or perform any type of mathematics on letters, so we need to convert these to numbers. And of course, we have our conversion chart right here at the top. So let’s convert “hello” into a series of numbers, 7, 4, 11, 11, 14. And let’s take the same thing with our key and convert that, 23, 12, 2, 10, 11.
Now, we’ve got two sets of numbers, and we’re just going to add them together, and if we add 7 and 23, well, we kind of go off the end here past 25. If you go past 25, you wrap all the way back to zero and keep counting up. So 7 plus 23 happens to be the number 4. We’re going to associate this with a letter in a moment.
So if you add all of these columns up, you get 4, 16, 13, 21, and 25, and if you, then, convert those back to letters, you get E, Q, N, V, Z. So there’s our encrypted message. The idea is, on the other end, someone will have the exact same key that we have. They’ll take our message, simply subtract the key numbers from it to come up with the plain text numbers, and then associate those back with the letters H, E, L, L, O, to get the message “hello.”

Source: http://www.professormesser.com/security-plus/sy0-401/one-time-pads-2/
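Here is a minimal Python sketch of exactly that scheme, adding the key letters mod 26 to encrypt and subtracting them to decrypt (letters only, A = 0 through Z = 25):

def otp(text, key, decrypt=False):
    # Add (or subtract) each key letter mod 26, as in the example above
    out = []
    for t, k in zip(text.upper(), key.upper()):
        shift = ord(k) - ord('A')
        if decrypt:
            shift = -shift
        out.append(chr((ord(t) - ord('A') + shift) % 26 + ord('A')))
    return ''.join(out)

ct = otp('HELLO', 'XMCKL')            # 'EQNVZ'
pt = otp(ct, 'XMCKL', decrypt=True)   # 'HELLO'
print(ct, pt)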

Sunday, October 11, 2015

Redundancy, Fault Tolerance, and High Availability

Whenever we think of keeping all of our systems up and running in an environment, we very often think about what can happen if we lose a server, if we lose a router, if we lose another component within our devices. So we have to think about redundancy and fault tolerance. These are very similar ideas, redundancy and fault tolerance. The idea is to keep things up and running and maintain uptime. We want to be sure that all of the systems, all of the things on our network– that we’re able to use all of the resources available to us and our company continues to function the way it should.
So we need to make sure, for instance, that we don’t have a hardware failure. We may want to have redundant servers. Or within a single server, we may want to have redundant power supplies. And so by keeping those redundancies of those systems, if we happen to lose a power supply or we happen to lose a motherboard in a server, we’ve got another one sitting right there, ready to take its place so that we can keep things up and running.
We also need to think about the software that we’re running on these systems. We may want to get software that’s able to notify us whenever there’s a problem, or work in conjunction with other pieces of software that might be running, perhaps in a cluster, so that if one particular piece of software fails, you’ve got other pieces of software running on the same network that are able to pick up the slack should that problem occur.
And we also want to be sure we don’t have any major system problems. Maybe we would like to have redundant routers. Maybe we’d like redundant firewalls. Maybe we would like redundant wide-area network links to the internet. You can obviously really apply different types of redundancy and fault tolerance to many environments. So by having these extra systems in place, we can always be assured that our systems will be available and up and running 100% of the time.
Now just because you have multiple servers or multiple systems– you’ve got that redundancy– doesn’t necessarily mean that your environment is highly available. High availability means that the systems will always be available regardless of what happens. With redundancy, you may have to flip a switch to move from one server to the other, or you may have to power up a new system to be able to have that system available. High availability is generally considered to be always on, always available.
If you have multiple high availability systems and you lose one, it doesn’t matter. Everybody continues to run because you’ve got an extra system ready to take up the extra slack, the extra load associated with that resource. There may be many different components working together to have this happen.
You may have extra and multiple wide-area network connections with multiple routers, with multiple firewalls, with multiple switches going to multiple servers, and they’re all working together and in conjunction. Each one of those sections would be set up to have high availability so that if any particular one of those failed, all of the other components can work together to keep the resources up and running in your organization.
Now redundancy and fault tolerance means that we’re going to need to have redundant hardware components. So you can already think about having multiple power supplies, maybe having multiple devices available for us to use. We might also want to have multiple disks. Within a single server, in fact, you can have something called RAID, which is a Redundant Array of Independent Disks. And this RAID methodology means that if we lose one disk, we have options to keep the system up and running without anybody ever knowing that there was a problem with that piece of hardware.
Another piece of hardware we may want to have– because we’re never quite certain how power is going to be in our environment– is something called an uninterruptible power supply. You’ll hear this referred to as a UPS. If we ever lose power, these UPS systems have inside of them the batteries and some other method to keep things up and running.
And those UPS systems can be extremely valuable, especially if you’re in an environment where power is always a little sketchy. You may be in the southern United States during the summer where there are a lot of thunderstorms. Power goes on and off all the time. You almost require a UPS on your system to make sure things are available to you.
If you want to be sure that resources running on a server are available, you may want to consider clustering a number of servers together. That way if you lose a motherboard, if a system becomes unplugged, or if a piece of software in a system fails, you can have these extra systems in your cluster to keep everything up and running. And since all of those cluster machines are all talking to each other, they know if there’s an outage and they’ll be able to take those resources and make sure that everybody is able to run all of the systems that they need to run.
You also very often see these systems load balancing. It’s very important. If you have multiple systems in place, you want to have all of them running all the time so that you’re balancing the load between them. And if you lose one, everybody will flip over to the other. Because the load is being balanced, you’ll want to make sure that you have additional resources available on that original machine so that it’s able to keep up with the load. It’s a lot like having multiple engines on a plane. If you lose one engine, you know that extra engine on the plane is designed to be able to keep that plane in the air until you’re able to get it down on the ground safely.
I mentioned that Redundant Array of Independent Disks that you might have inside of a single server. There are different types of RAID out there. This chart shows you an idea of the primary kinds that you’ll run into. RAID 0, for instance, is a method called striping without parity. What that means is you have multiple disks, and parts of each file are written across those multiple disks, but only part of the file on each, which means that we’re able to have very high performance because we’re writing tiny pieces to many different disks at the same time. The problem is, there’s no parity, which means if we lose any one of those disks, the entire system is unavailable to us. So there’s no fault tolerance associated with that at all.
Another RAID type is RAID 1, or mirroring, where we are exactly duplicating this information across multiple disks. So if I have a 2 terabyte disk, I’ll have a duplicate 2 terabyte disk that has exactly the same information on it. If I lose the first disk, it continues to run, because now we’re fault tolerant. I can use the exact copy of that disk in RAID 1.
RAID 5 is very similar to RAID 0. It is striping, but it includes an extra drive for parity data, which means I’m not getting an exact duplicate of the data, but if I lose any of those drives, I still have a way to fault tolerantly retrieve all of that data from the disks. This is a pretty advanced system to be able to do something like that, but it means that if I lose any physical drive, I’m still up and running. And I’m not using the exact duplicate amount of data that I have in RAID 1. So we’ve got some efficiencies there in the amount of storage in our systems.
Occasionally, you’ll see these RAID systems combined with other systems. You might have striping without parity, but you’ll mirror that striping. Or you’ll mirror the data and have it striped to a parity disk or striped to a non-parity system. So you’ve got different options where you can combine these things together. So you often see RAID 0 plus 1 or RAID 5 plus 1, where you are doing striping with parity and mirroring all at the same time. A lot of flexibility there, and if you’re building these file systems in your servers, you’ll want to check and see what RAID options might be available for you.
I mentioned server clustering. That’s a really useful way to keep systems up and running, and to provide availability 100% of the time. In an active/active server cluster, all of your end users are out here accessing different servers in your environment. And these servers are always active with each other. They’re constantly communicating between each other so that the two systems know if they’re available and running. And then you have behind-the-scenes storage that both of these systems will share. The idea is that if you lose one section of this cluster, everybody can still go right to the other active side of the cluster to be able to use those resources.
An active/passive cluster is a little bit different. In active/passive, you have one system that is always active and one system that is always passive. The passive system is sitting there and doing nothing. It is waiting for a problem to occur. These clusters are always talking to each other and making sure they’re up and running. And if Node 2 notices that Node 1 has disappeared, that the active system is no longer there, it automatically makes itself available to the world. And now all of the clients begin using the backup or the passive system to be able to perform whatever function they need across this network.
Active/passive systems are generally much easier to implement, because these are exactly the same type of systems. Active/active tends to be a little bit more complex to implement, because you now have multiple systems talking to multiple servers simultaneously. There has to be a way to keep track of that and make sure that everybody’s talking to the right systems at one time. But whether you’re using active/active or active/passive, you have systems that are redundant and available should there be any problems on your network.
If you’re planning to have redundant systems, you may not have them all running the same way. You may have cold spares, which means you’ve bought an additional server, but you’re keeping it in a box in a storeroom somewhere. You may have 10 servers sitting in the rack, and if any of those 10 servers fails, you can go to the storeroom, pull your one spare out of there– your cold spare– put that in the rack. And then of course, you have to configure it, because this is a fresh configuration.
You may want to have something called a warm spare, which means that spare is something that you might have even put into the rack. You’ll occasionally have it turned on. You may have it updated with the latest software, updated with your configurations. That way if you do have a problem, you simply flip a switch, turn it on, or plug it in. And now that warm spare is ready to go. You don’t have to now perform any additional configurations or load any additional software to get that running.
And obviously, your last option is a hot spare. It’s always on. It’s always updated. In many cases, it’s designed to automatically take over should there be a problem. So if you do have a problem with the system, it goes down, you can immediately move to the hot spare and it has an exact duplicate, an exact updated system that everybody can now use to perform the function that they need on your network.

source: http://www.professormesser.com/security-plus/sy0-401/redundancy-fault-tolerance-and-high-availability-2/

LDAP - info



LDAP is based on an earlier version of X.500. Windows Active Directory domains and Unix realms use LDAP to identify objects in query strings with codes such as CN=Users and DC=example. Secure LDAP encrypts transmissions with SSL or TLS.


Administrators often use LDAP in scripts, but they need to have a basic understanding of how to identify objects. For example, a user named Homer in the Users container within the example.com domain is identified with the following LDAP string:
LDAP://CN=Homer,CN=Users,DC=example,DC=com
CN=Homer. CN is short for common name.
CN=Users. CN is sometimes referred to as container in this context.
DC=example. DC is short for domain component.
DC=com. This is the second domain component in the domain name.
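If you script against LDAP, the same pieces appear as a search base and filter. Here is a minimal sketch using the third-party Python ldap3 package; the server name and credentials are hypothetical:

from ldap3 import Server, Connection, ALL

server = Server('ldap://dc1.example.com', get_info=ALL)
conn = Connection(server, user='EXAMPLE\\admin', password='secret',
                  auto_bind=True)

# Look up Homer in the Users container of example.com
conn.search(search_base='CN=Users,DC=example,DC=com',
            search_filter='(cn=Homer)',
            attributes=['distinguishedName'])
print(conn.entries)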




Wednesday, October 7, 2015

Disk Space Size - Data Size


Big Data is really big. It’s so big that analysts are currently trying to figure out what to name the next iteration. Currently, yottabyte is the largest name and it refers to 1,000 zettabytes.

For the record, the order is gigabyte, terabyte, petabyte, exabyte, zettabyte, and then yottabyte. The next name might be hellabyte, using northern California slang of “hella” meaning “a lot.” Seriously. This is one of the names proposed to the International System of Units.

Hexadecimal Conversion


Decimal:  0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15
Hex:      0   1   2   3   4   5   6   7   8   9   A   B   C   D   E   F

or

0 -->> 0
1 -->>  1
2  -->> 2
3  -->> 3
4  -->> 4
5  -->> 5
6  -->> 6
7  -->> 7
8  -->> 8
9  -->> 9
10  -->> A
11  -->> B
12  -->> C
13  -->> D
14  -->> E
15  -->> F
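Python can do the same conversion in both directions; a quick sketch:

print(format(12, 'X'))   # 'C'  (decimal to hex digit)
print(int('C', 16))      # 12   (hex digit to decimal)
print(hex(255))          # '0xff'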


The OSI Model

Layer 7 - Application: The layer we see - Google Mail, Twitter, Facebook
Layer 6 - Presentation: Encoding and encryption (SSL/TLS)
Layer 5 - Session: Communication between devices (control protocols, tunneling protocols)
Layer 4 - Transport: The "post office" layer (TCP segment, UDP datagram)
Layer 3 - Network: The routing layer (IP address, router, packet)
Layer 2 - Data Link: The switching layer (frame, MAC address, EUI-48, EUI-64, switch)
Layer 1 - Physical: Signaling, cabling, connectors (cable, NIC, hub)

OSI Mnemonics 
• Please Do Not Trust Sales Person’s Answers
• All People Seem To Need Data Processing
• Please Do Not Throw Sausage Pizza Away!

Source: Professor Messer

Troubleshooting Process


• Identify the problem
• Information gathering, identify symptoms, question users
• Establish a theory of probable cause
• Test the theory to determine cause
• Establish a plan of action to resolve the problem and identify potential effects
• Implement the solution or escalate as necessary
• Verify full system functionality and, if applicable, implement preventative measures
• Document findings, actions and outcomes

Source: Professor Messer

DNS Resolution Process


Domain Name System (DNS)
DNS resolves host names to IP addresses. This eliminates the need for you and me to have to remember the IP address for web sites. Instead, we simply type the name into the browser, and it connects. For example, if you type in google.com as the Uniform Resource Locator (URL) in your web browser, your system queries a DNS server for the IP address. DNS responds with the correct IP address and your system connects to the web site using the IP address.

DNS also provides reverse lookups. In a reverse lookup, a client sends an IP address to a DNS
server with a request to resolve it to a name. Some applications use this as a rudimentary security mechanism to detect spoofing. For example, an attacker may try to spoof the computer’s identity by using a different name during a session. However, the Transmission Control Protocol/Internet Protocol (TCP/IP) packets in the session include the IP address of the masquerading system and a reverse lookup shows the system’s actual name. If the names are different, it shows suspicious activity. Reverse lookups are not 100 percent reliable because reverse lookup records are optional on DNS servers. However, they are useful when they’re available.

Two attacks against DNS services are DNS poisoning and pharming.

DNS Resolution Process
1 - Request sent to local name server
2 - Name server queries root server
3 - Root response sent to local name server
4 - Name server queries .com name server
5 - .com Response sent to local name server
6 - Name server queries specific domain server
7 - Domain server responds to name server
8 - Name server provides result to local device
9 - Answer is cached locally

DNS Records
• A and AAAA - Address
• CNAME - Canonical name
• MX - Mail exchanger
• NS - Name server
• PTR - Pointer

Source: Professor Messer
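For illustration, here is a small Python sketch of a forward and a reverse lookup using the standard socket module (the reverse lookup only succeeds if a PTR record exists):

import socket

# Forward lookup: name to IP address (an A record)
ip = socket.gethostbyname('google.com')
print(ip)

# Reverse lookup: IP address back to a name (a PTR record)
try:
    name, aliases, addresses = socket.gethostbyaddr(ip)
    print(name)
except socket.herror:
    print('no PTR record for', ip)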

Tuesday, October 6, 2015

Solaris 10 - CPU/Memory Allocation

How to add memory to an LDOM

1. Log in to the LDOM where you will be adding extra RAM and find the control domain.
# virtinfo -a
Control domain: "Name_of_Control_Domain"

2. Find the memory currently assigned to your system.
# prtdiag -v | grep -i mem
Memory size: 1234 Megabytes

3. Now, log in to your control domain and check whether you have enough resources available to assign to the LDOM.

a. Check the memory allocation on all LDOM
# ldm list | awk '{print $1 "\t" $6}'
or
to see detail allocation
# ldm ls -o memory
# ldm ls -o cpu
# ldm ls -l <MY_LDOM>

b. Check available memory that you can assign to LDOM,
# ldm list-devices memory
Memory
PA SIZE
0x13c0000000 16G
This output shows there is 16GB of space available. If no value is returned, that means you don't have any memory available.


4. Now, assign memory to the LDOM. For example:
# ldm list
a. Set memory to 32GB
# ldm set-memory 32G <MY_LDOM>

b. Add 16 GB of ram
# ldm add-memory 16G <MY_LDOM>

c. Remove 16GB of ram
# ldm remove-memory 16G <MY_LDOM>

5. Now, go back to your LDOM and verify the added memory
# prtconf | grep -i mem

---------------------------------------

To work on CPU/Processor

1. Log in to your LDOM and check the number of CPUs assigned to the system.
# psrinfo | wc -l

2. Log in to the control domain and check the available CPUs
a. Check available cpu,
# ldm list-devices cpu
# ldm list-devices vcpu | wc -l
You will see the number of vCPUs assigned.
Each vCPU displayed is a hardware thread that acts like a CPU.
Note: Most systems (T4/T5, but not all) come with 1 core = 8 threads.
Some environments assign by core and some by thread.

b. To set a specific number of vCPUs
# ldm set-vcpu 32 <MY_LDOM>

c. To add 16 vCPUs
# ldm add-vcpu 16 <MY_LDOM>

d. To remove 16 vCPUs
# ldm remove-vcpu 16 <MY_LDOM>


Some folks prefer to do it this way:
1. Shut down the LDOM
- Log in to the control domain, and log in to the LDOM through the console
# ldm list
# telnet 0 5000
# init 0
{ok}

2. Allocate cpu/mem from Control domain
# ldm set-vcpu 16 <MY_LDOM>
# ldm set-memory 32G <MY_LDOM>

3. Start LDOM and boot the system
# ldm start <MY_LDOM>
# telnet 0 5000
{ok} boot


Note: If you are using cores rather than threads (vcpus), then you can do it this way as well.
a. First check the LDOM config
# ldm list-bindings <my_ldom> | more
Look at the values under CONSTRAINT. If you see something like cpu=whole-core, then you can work with cores. In most environments, 1 core = 8 vcpus.
    cpu=whole-core
    max-cores-unlimited


# ldm remove-core 2 <my_ldom>
# ldm add-core 2 <my_ldom>


Resource capping
-----------------
When you reduce the memory, I would suggest rebooting the system if memory utilization is high. Say you have 32 GB of RAM and 20 GB is in use, and they want you to reduce it by 16 GB; then you would probably want to cap the memory. Most applications run under a specific user, so find the user and cap the memory.

Whichever way you do it, just add an entry to /etc/project. Let's say you want to cap memory (at 10GB) for a user called wauser:

10GB = 10 x 1024 x 1024 x 1024 = 10737418240 bytes

# vi /etc/project
user.wauser:100::wauser::project.max-shm-memory=(privileged,8589934592,deny);rcap.max-rss=10737418240


Set zfs arc size
-----------------

vi /etc/system

* zfs arc settings: 2.5GB max, 64MB min
set zfs:zfs_arc_max=2684354560
set zfs:zfs_arc_min=67108864
* set max file descriptors
set rlim_fd_max=65536
:wq!




A swap (or paging) file is an extension of RAM, but it is stored on the hard drive. The swap file is rebuilt each time the system is rebooted.

Note: settings in /etc/system take effect only after you reboot the system.

Sunday, October 4, 2015

DMZ

A DMZ is a buffered zone between an internal network and the Internet. The DMZ provides a layer of protection for Internet-facing servers, but servers in the DMZ are available
on the Internet.

Public Key Infrastructure (PKI)

Exploring PKI Components
A Public Key Infrastructure (PKI) is a group of technologies used to request, create, manage,
store, distribute, and revoke digital certificates. Asymmetric encryption depends on the use of
certificates for a variety of purposes, such as protecting email and protecting Internet traffic with SSL
and TLS. For example, HTTPS sessions protect Internet credit card transactions, and these
transactions depend on a PKI.

A primary benefit of a PKI is that it allows two people or entities to communicate securely
without knowing each other previously. In other words, it allows them to communicate securely
through an insecure public medium such as the Internet.

For example, you can establish a secure session with Amazon.com even if you’ve never done so
before. Amazon purchased a certificate from VeriSign. The certificate provides the ability to establish a secure session.
A key element in a PKI is a Certificate Authority.

Certificate Authority
A Certificate Authority (CA, pronounced “cah”) issues, manages, validates, and revokes
certificates. In some contexts, you might see a CA referred to as a certification authority, but they are
the same thing. CAs can be very large, such as VeriSign, which is a public CA. A CA can also be very small, such as a single service running on a server in a domain.

Public CAs make money by selling certificates. For this to work, the public CA must be trusted.
Certificates issued by the CA are trusted as long as the CA is trusted.

This is similar to how a driver’s license is trusted. The Department of Motor Vehicles (DMV)
issues driver’s licenses after validating a person’s identity. If you want to cash a check, you may
present your driver’s license to prove your identity. Businesses trust the DMV, so they trust the
driver’s license.

Although we may trust the DMV, why would a computer trust a CA? The answer is based on the
certificate trust path.

How do digital signatures work?

Digital signature process
Lisa creates her message in an email program, such as Microsoft Outlook. Once Microsoft
Outlook is configured, all she has to do is click a button to digitally sign the message. Here is what
happens when she clicks the button:
1. The application hashes the message.
2. The application retrieves Lisa’s private key and encrypts the hash using this private key.
3. The application sends both the encrypted hash and the unencrypted message to Bart.
When Bart’s system receives the message, it verifies the digital signature using the following
steps:
1. Bart’s system retrieves Lisa’s public key, which is in Lisa’s public certificate. In some
situations, Lisa may have sent Bart a copy of her certificate with her public key. In domain
environments, Bart’s system can automatically retrieve Lisa’s certificate from a network
location.
2. The email application on Bart’s system decrypts the encrypted hash with Lisa’s public key.
3. The application calculates the hash on the received message.
4. The application compares the decrypted hash with the calculated hash.
If the calculated hash of the received message is the same as the encrypted hash of the digital
signature, it validates several important checks:

Authentication. Lisa sent the message. The public key can only decrypt something encrypted
with the private key, and only Lisa has the private key. If the decryption succeeded, Lisa’s
private key must have encrypted the hash. On the other hand, if another key was used to
encrypt the hash, Lisa’s public key could not decrypt it. In this case, Bart will see an error
indicating a problem with the digital signature.

Non-repudiation. Lisa cannot later deny sending the message. Only Lisa has her private key
and if the public key decrypted the hash, the hash must have been encrypted with her private
key. Non-repudiation is valuable in online transactions.

Integrity. Because the hash of the sent message matches the hash of the received message, the
message has maintained integrity. It hasn’t been modified.
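The whole sign-and-verify flow fits in a few lines of Python. This is only a sketch using the third-party cryptography package; in practice Lisa's public key would come from her certificate rather than being generated on the spot:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Lisa's key pair (normally the public key is distributed in her certificate)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b'Hi Bart'

# Sender: hash the message and encrypt the hash with the private key
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Recipient: verify() hashes the received message, decrypts the signature
# with the public key, and compares; it raises an error on a mismatch
try:
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print('valid: authentication, integrity, non-repudiation')
except InvalidSignature:
    print('invalid signature')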

A digital signature is an encrypted hash of a message. The sender’s private key encrypts the hash of the message to create the digital signature. The recipient decrypts the hash with the sender’s public key. If successful, it provides authentication, non-repudiation, and integrity. Authentication identifies the sender. Integrity verifies the message has not been modified. Non-repudiation prevents senders from later denying they sent an email.

Source: Darril Gibson

Saturday, October 3, 2015

Fire categories

The different components of a fire are heat, oxygen, fuel, and a chain reaction creating the fire.
Fire suppression methods attempt to remove or disrupt one of these elements to extinguish a fire. You
can extinguish a fire using one of these methods:

Remove the heat. Fire extinguishers commonly use chemical agents or water to remove the
heat. However, water should never be used on an electrical fire.

Remove the oxygen. Many methods use a gas, such as carbon dioxide (CO2) to displace the
oxygen. This is a common method of fighting electrical fires because CO2 and similar gasses
are harmless to electrical equipment.

Remove the fuel. Fire-suppression methods don’t typically fight a fire this way, but of
course, the fire will go out once all the material is burned.

Disrupt the chain reaction. Some chemicals can disrupt the chain reaction of fires to stop
them.

The class of fire often determines what element of the fire you will try to remove or disrupt.
Within the United States, fires are categorized in one of the following fire classes:
Class A—Ordinary combustibles. These include wood, paper, cloth, rubber, trash, and
plastics.

Class B—Flammable liquids. These include gasoline, propane, solvents, oil, paint, lacquers,
and other synthetics or oil-based products.

Class C—Electrical equipment. This includes computers, wiring, controls, motors, and
appliances. In a computer-centric environment, you should especially understand that a Class C fire is from electrical equipment. You should not fight Class C fires with water or water-based materials, such as foam, because water is conductive and can pose significant risks to personnel.

Class D—Combustible metals. This includes metals such as magnesium, lithium, titanium,
and sodium. Once they start to burn, they are much more difficult to extinguish than other
materials.

You can extinguish a Class A fire with water to remove the heat. However, water makes things
much worse if you use it on any of the other classes. For example, using water on live equipment
actually poses a risk because electricity can travel up the water stream and shock you. Additionally,
water damages electrical equipment.

Comparing Backup Types - Back up and Restore

Comparing Backup Types

Backup utilities support several different types of backups. Even though third-party backup
programs can be quite sophisticated in what they do and how they do it, you should have a solid
understanding of the basics.
The most common media used for backups is tape. Tapes store more data and are cheaper than
other media, though some organizations use hard disk drives for backups. However, the type of media doesn’t affect the backup type.

The following backup types are the most common:
Full backup. A full (or normal) backup backs up all the selected data.
Differential backup. This backs up all the data that has changed or is different since the last
full backup.
Incremental backup. This backs up all the data that has changed since the last full or incremental backup.

Full Backups
A full backup backs up all data specified in the backup. For example, you could have several
folders on the D: drive. If you specify these folders in the backup program, the backup program backs
up all the data in these folders.
Although it’s possible to do a full backup on a daily basis, it’s rare to do so in most production
environments. This is because of two limiting factors:

Time. A full backup can take several hours to complete and can interfere with operations.
However, administrators don’t always have unlimited time to do backups and other system
maintenance. For example, if a system is online 24/7, administrators may need to limit the
amount of time for full backups to early Sunday morning to minimize the impact on users.
Money. Backups need to be stored on some type of media, such as tape or hard drives.
Performing full backups every day requires more media, and the cost can be prohibitive.
Instead, organizations often combine full backups with differential or incremental backups.
However, every backup strategy must start with a full backup.

Restoring a Full Backup
A full backup is the easiest and quickest to restore. You only need to restore the single full
backup and you’re done. If you store backups on tapes, you only need to restore a single tape.
However, most organizations need to balance time and money and use either a full/differential or a
full/incremental backup strategy.

Differential Backups
A differential backup strategy starts with a full backup. After the full backup, differential
backups back up data that has changed or is different since the last full backup.
For example, a full/differential strategy could start with a full backup on Sunday night. On
Monday night, a differential backup would back up all files that changed since the last full backup on
Sunday. On Tuesday night, the differential backup would again back up all the files that changed since the last full backup. This repeats until Sunday, when another full backup starts the process again. As the week progresses, the differential backup steadily grows in size.

Restoring a Full/Differential Backup Set
Assume for a moment that each of the backups was stored on different tapes. If the system
crashed on Wednesday morning, how many tapes would you need to recover the data?
The answer is two. You would first recover the full backup from Sunday. Because the
differential backup on Tuesday night includes all the files that changed after the last full backup, you
would restore that tape to restore all the changes up to Tuesday night.

Incremental Backups
An incremental backup strategy also starts with a full backup. After the full backup, incremental
backups then back up data that has changed since the last backup. This includes either the last full
backup, or the last incremental backup.

As an example, a full/incremental strategy could start with a full backup on Sunday night. On
Monday night, an incremental backup would back up all the files that changed since the last full
backup. On Tuesday night, the incremental backup would back up all the files that changed since the
incremental backup on Monday night. Similarly, the Wednesday night backup would back up all files
that changed since the last incremental backup on Tuesday night. This repeats until Sunday when
another full backup starts the process again. As the week progresses, the incremental backups stay
about the same size.

Restoring a Full/Incremental Backup Set
Assume for a moment that each of the backups was stored on a different tape. If the system
crashed on Thursday morning, how many tapes would you need to recover the data?
The answer is four. You would first need to recover the full backup from Sunday. Because the
incremental backups would be backing up different data each day of the week, each of the incremental backups must be restored, in chronological order.
Sometimes, people mistakenly think the last incremental backup would have all the relevant
data. Although it might have some relevant data, it doesn’t have everything.

As an example, imagine you worked on a single project file each day of the week, and the system
crashed on Thursday morning. In this scenario, the last incremental backup would hold the most
recent copy of this file. However, what if you compiled a report every Monday but didn’t touch it
again until the following Monday? Only the incremental backup from Monday would include the most recent copy. An incremental backup from Wednesday night or another day of the week wouldn’t include the report.

Choosing Full/Incremental or Full/Differential
A logical question is, “Why are there so many choices for backups?” The answer is that different
organizations have different needs.
For example, imagine two organizations perform daily backups to minimize losses. They each
do a full backup on Sunday, but are now trying to determine if they should use a full/incremental or a
full/differential strategy.
The first organization doesn’t have much time to perform maintenance throughout the week. In
this case, the backup administrator needs to minimize the amount of time required to complete
backups during the week. An incremental backup only backs up the data that has changed since the
last backup. In other words, it includes changes only from a single day. In contrast, a differential
backup includes all the changes since the last full backup. Backing up the changes from a single day
takes less time than backing up changes from multiple days, so a full/incremental backup is the best
choice.
In the second organization, recovery of failed systems is more important. If a failure requires
restoring data, they want to minimize the amount of time needed to restore the data. A full/differential is the best choice in this situation because it only requires the restoration of two backups, the full and the most recent differential backup. In contrast, a full/incremental can require the restoration of several different backups, depending on when the failure occurs.
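A toy Python sketch makes the difference concrete. It assumes a full backup on Sunday and daily backups Monday through Saturday, and returns the backups needed, in order, for a crash on a given morning:

days = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']

def tapes_needed(strategy, crash_day):
    done = days[:days.index(crash_day)]   # backups completed before the crash
    if strategy == 'differential':
        # the full plus only the most recent differential
        return ['Sun full'] + ([done[-1] + ' diff'] if len(done) > 1 else [])
    if strategy == 'incremental':
        # the full plus every incremental since, in chronological order
        return ['Sun full'] + [d + ' incr' for d in done[1:]]

print(tapes_needed('differential', 'Wed'))  # ['Sun full', 'Tue diff'] = 2 tapes
print(tapes_needed('incremental', 'Thu'))   # ['Sun full', 'Mon incr', 'Tue incr', 'Wed incr'] = 4 tapes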

Remember this
If you have unlimited time and money, the full backup alone provides the
fastest recovery time. Full/incremental strategies reduce the amount of time
needed to perform backups. Full/differential strategies reduce the amount of
time needed to restore backups.

Testing Backups
I’ve heard many horror stories in which personnel are regularly performing backups thinking all
is well. Ultimately, something happens and they need to restore some data. Unfortunately, they
discover that none of the backups holds valid data. People have been going through the motions, but
something in the process is flawed.
The only way to validate a backup is to perform a test restore. Performing a test restore is
nothing more than restoring the data from a backup and verifying its integrity. If you want to verify that
you can restore the entire backup, you perform a full restore of the backup. If you want to verify that
you can restore individual files, you perform a test restore of individual files. It’s common to restore
data to a different location other than the original source location, but in such a way that you can
validate the data.
As a simple example, an administrator can retrieve a random backup and attempt to restore it.
There are two possible outcomes of this test, and both are good:
The test succeeds. Excellent! You know that the backup process works. You don’t
necessarily know that every backup tape is valid, but at least you know that the process is
sound and at least some of your backups work.
The test fails. Excellent! You know there’s a problem that you can fix before a crisis. If you
discovered the problem after you actually lost data, it wouldn’t help you restore the data.
An additional benefit of performing regular test restores is that it allows administrators to
become familiar with the process. The first time they do a restore shouldn’t be in the middle of a
crisis with several high-level managers peering over their shoulders.

Source: Darril Gibson

Load Balancers for High Availability



Load balancing spreads the processing load over multiple servers to ensure availability when
the processing load increases. Many web-based applications use load balancing for higher
availability.

A load balancer can optimize and distribute data loads across multiple computers or multiple
networks. For example, if an organization hosts a popular web site, it can use multiple servers hosting
the same web site in a web farm. Load-balancing software distributes traffic equally among all the
servers in the web farm.

The term load balancer makes it sound like it’s a piece of hardware, but a load balancer can be
hardware or software. A hardware-based load balancer accepts traffic and directs it to servers based
on factors such as processor utilization and the number of current connections to the server. A
software-based load balancer uses software running on each of the servers in the load-balanced
cluster to balance the load.

Load balancing primarily provides scalability, but it also contributes to high availability.
Scalability refers to the ability of a service to serve more clients without any decrease in
performance. Availability ensures that systems are up and operational when needed. By spreading the
load among multiple systems, it ensures that individual systems are not overloaded, increasing
overall availability.

Consider a web server that can serve 100 clients per minute, but if more than 100 clients connect
at a time, performance degrades. You need to either scale up or scale out to serve more clients. You
scale the server up by adding additional resources, such as processors and memory, and you scale out
by adding additional servers in a load balancer.

Figure 9.2 shows an example of a load balancer with multiple web servers. Each web server
includes the same web application. Some load balancers simply send new clients to the servers in a
round-robin fashion. The load balancer sends the first client to Server 1, the second client to Server
2, and so on. Other load balancers automatically detect the load on individual servers and send new
clients to the least used server.

Figure 9.2: Load balancing
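A round-robin rotation is simple enough to sketch in a few lines of Python; the server and client names are made up:

import itertools

servers = ['server1', 'server2', 'server3']   # hypothetical web farm
rotation = itertools.cycle(servers)

# Each new client is sent to the next server in the rotation
for client in ['alice', 'bob', 'carol', 'dave']:
    print(client, '->', next(rotation))
# alice -> server1, bob -> server2, carol -> server3, dave -> server1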

An added benefit of many load balancers is that they can detect when a server fails. If a server
stops responding, the load-balancing software no longer sends clients to this server. This contributes
to overall high availability for the load balancer.

When servers are load balanced, it’s called a load-balanced cluster, but it is not the same as a
failover cluster. A failover cluster provides high availability by ensuring another node can pick up the
load for a failed node. A load-balanced cluster provides high availability by sharing the load among
multiple servers. When systems must share the same data storage, a failover cluster is appropriate.
However, when the systems don’t need to share the same storage, a load-balancing solution is more
appropriate, and less expensive. Also, it’s relatively easy to add additional servers to a load-balancing solution.

Remember this
Failover clusters are one method of server redundancy and they provide high
availability for servers. They can remove a server as a single point of failure.
Load balancing increases the overall processing power of a service by
sharing the load among multiple servers. Load balancers also ensure
availability when a service has an increased number of requests.

Source: Darril Gibson Book Sec +

Server Redundancy - Clustering info

Server Redundancy

Server redundancies include failover clusters and load balancing. Failover clusters remove a
server as a single point of failure. If one node in a cluster fails, another node can take over.


Some services require a high level of availability and it’s possible to achieve 99.999 percent
uptime, commonly called five nines. It equates to less than 6 minutes of downtime a year: 60 minutes
× 24 hours × 365 days × .00001 = 5.256 minutes. Failover clusters are a key component used to
achieve five nines.

Although five nines is achievable, it’s expensive. However, if the potential cost of an outage is
high, the high cost of the redundant technologies is justified. For example, some web sites generate a
significant amount of revenue, and every minute a web site is unavailable represents lost money.
High-capacity failover clusters ensure the service is always available even if a server fails.

Failover Clusters for High Availability

The primary purpose of a failover cluster is to provide high availability for a service offered by
a server. Failover clusters use two or more servers in a cluster configuration, and the servers are
referred to as nodes. At least one server or node is active and at least one is inactive. If an active
node fails, the inactive node can take over the load without interruption to clients.
Consider Figure 9.1, which shows a two-node failover cluster. Both nodes are individual
servers, and they both have access to external data storage used by the active server. Additionally, the
two nodes have a monitoring connection to each other used to check the health or heartbeat of each
other.

Figure 9.1: Failover cluster
Imagine that Node 1 is the active node. When any of the clients connect, the cluster software
(installed on both nodes) ensures that the clients connect to the active node. If Node 1 fails, Node 2
senses the failure through the heartbeat connection and configures itself as the active node. Because
both nodes have access to the shared storage, there is no loss of data for the client. Clients may notice
a momentary hiccup or pause, but the service continues.

You might notice that the shared storage in Figure 9.1 represents a single point of failure. It’s not
uncommon for this to be a robust hardware RAID-6. This ensures that even if two hard drives in the
shared storage fail, the service will continue. Additionally, if both nodes are plugged into the same
power grid, the power represents a single point of failure. They can each be protected with a separate
uninterruptible power supply (UPS), and use a separate power grid.

Cluster configurations can include many more nodes than just two. The nodes need close to
identical hardware and are often quite expensive, but if a company truly needs to achieve
99.999 percent uptime, it’s worth the expense.
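
On Linux, one common way to build such a cluster is the Pacemaker/Corosync stack. Below is a minimal sketch using the pcs tool (an assumption; the source names no cluster software, and the node names and floating IP are hypothetical):

# authorize the two nodes to each other (RHEL 6/7-era pcs syntax)
pcs cluster auth node1 node2
# create and start a two-node cluster
pcs cluster setup --name webcluster node1 node2
pcs cluster start --all
# add a floating IP that clients use; it follows whichever node is active,
# and the 30-second monitor acts like the heartbeat check described above
pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.1.100 cidr_netmask=24 op monitor interval=30s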

Disk Redundancies - RAID

Disk Redundancies

Any system has four primary resources: processor, memory, disk, and the network interface. Of
these, the disk is the slowest and most susceptible to failure. Because of this, administrators often
upgrade disk subsystems to improve their performance and redundancy.

Redundant array of inexpensive disks (RAID) subsystems provide fault tolerance for disks and
increase the system availability. Even if a disk fails, most RAID subsystems can tolerate the failure
and the system will continue to operate. RAID systems are becoming much more affordable as the
price of drives steadily falls and disk capacity steadily increases.

In short, a RAID provides fault tolerance for disk drives and increases data availability if drives fail, while a cluster provides fault tolerance at the server level and ensures a service continues to operate even if a server fails. However, a cluster is more expensive than a RAID.

RAID-0
RAID-0 (striping) is somewhat of a misnomer because it doesn’t provide any redundancy or
fault tolerance. It includes two or more physical disks. Files stored on a RAID-0 array are spread
across each of the disks.
The benefit of a RAID-0 is increased read and write performance. Because a file is spread
across multiple physical disks, the different parts of the file can be read from or written to each of the
disks at the same time. If you have three 500 GB drives used in a RAID-0, you have 1500 GB (1.5
TB) of storage space.
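
On Linux, a striped set can be sketched with mdadm (an assumption; the source names no tool, and the device names are hypothetical):

# stripe three 500 GB disks into one ~1.5 TB array (no redundancy)
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd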

RAID-1
RAID-1 (mirroring) uses two disks. Data written to one disk is also written to the other disk. If
one of the disks fails, the other disk still has all the data, so the system can continue to operate without any data loss. With this in mind, if you mirror all the drives in a system, you can lose one drive in each mirrored pair and continue to operate.

You can add an additional disk controller to a RAID-1 configuration to remove the disk
controller as a single point of failure. In other words, each of the disks also has its own disk
controller. Adding a second disk controller to a mirror is called disk duplexing.
If you have two 500 GB drives used in a RAID-1, you have 500 GB of storage space. The other
500 GB of storage space is dedicated to the fault-tolerant, mirrored volume.
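
A mirrored pair looks similar under mdadm (again with hypothetical device names):

# mirror two 500 GB disks; usable space is 500 GB
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# check the state of the mirror and the initial synchronization
cat /proc/mdstat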

RAID-2, RAID-3, and RAID-4 are rarely used.

RAID-5 and RAID-6
A RAID-5 is three or more disks that are striped together similar to RAID-0. However, the
equivalent of one drive includes parity information. This parity information is striped across each of
the drives in a RAID-5 and is used for fault tolerance. If one of the drives fails, the system can read
the information on the remaining drives and determine what the actual data should be. If two of the
drives fail in a RAID-5, the data is lost.
RAID-6 is an extension of RAID-5, and it includes an additional parity block. A huge benefit is
that the RAID-6 disk subsystem will continue to operate even if two disk drives fail. RAID-6
requires a minimum of four disks.
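
Hedged mdadm equivalents of both levels, with hypothetical device names:

# RAID-5: three disks, one disk's worth of parity, survives one failure
mdadm --create /dev/md5 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# RAID-6: minimum four disks, two parity blocks, survives two failures
mdadm --create /dev/md6 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde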

Remember this
RAID subsystems, such as RAID-1, RAID-5, and RAID-6, provide fault
tolerance and increased data availability. RAID-5 can survive the failure of
one disk. RAID-6 can survive the failure of two disks.

RAID-10
A RAID-10 configuration combines the features of mirroring (RAID-1) and striping (RAID-0).
RAID-10 is sometimes called RAID 1+0. A variation is RAID-01 or RAID 0+1 that also combines
the features of mirroring and striping but implements the drives a little differently.

Software Versus Hardware RAID
Hardware RAID configurations are significantly better than software RAID. In hardware RAID,
dedicated hardware manages the disks in the RAID, removing the load from the operating system. In
contrast, the operating system manages the disks in the RAID array in software RAID. Hardware
RAID systems provide better overall performance and often include extra features.
For example, a hardware RAID may include six physical disks using four in an active RAID-6
configuration and two as online spares. If one of the active disks in the RAID-6 fails, the RAID will
continue to operate because a RAID-6 can tolerate the failure.

Additionally, a hardware RAID can logically remove the failed disk from the configuration, add one
of the online spares into the configuration, and rebuild the array. All of this happens without any
administrator intervention. Hardware RAID systems are often hot swappable, allowing
administrators to swap out the failed drive without powering the system down.
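
A software-RAID analogue of that six-disk scenario can be sketched with mdadm's hot-spare support (the source describes hardware RAID; this is only an illustration with hypothetical devices):

# four active disks in RAID-6 plus two hot spares
mdadm --create /dev/md0 --level=6 --raid-devices=4 --spare-devices=2 /dev/sd[b-g]
# if an active disk fails, md rebuilds onto a spare automatically; watch progress:
cat /proc/mdstat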

Source: Darril Gibson 

Friday, October 2, 2015

NoSQL Versus SQL Databases

NoSQL Versus SQL Databases
Server-based SQL databases are traditional relational databases using tables that relate to each
other in one way or another. They are very effective in many situations, but not all. A newer type of
database has emerged known as not only SQL (NoSQL).
NoSQL databases typically hold one or more of the following types of data: documents, key-value
pairs, or graphs. Documents are formatted in a specific way and each document represents an
object. This is similar to how a table holds data in rows. However, the document-based NoSQL
database gives developers much more flexibility in how they can store and query the data.
Both NoSQL and SQL databases are susceptible to command injection attacks if developers do
not implement input validation techniques. SQL databases use SQL queries and are susceptible to
SQL injection attacks. NoSQL databases use unstructured query language (UQL) queries. Although the
format of UQL queries varies with different vendors, attackers can learn them and use them when
developers do not implement input validation techniques.
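
As an illustration of the injection risk (not from the source), the request below targets a hypothetical login endpoint backed by a MongoDB-style document database. If the application passes the JSON straight into its query, the $ne (not equal) operators match any username and password:

curl -X POST http://test.example.com/login \
  -H 'Content-Type: application/json' \
  -d '{"username": {"$ne": null}, "password": {"$ne": null}}'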

Buffer Overflows and Buffer Overflow Attacks

Buffer Overflows and Buffer Overflow Attacks

A buffer overflow occurs when an application receives more input, or different input, than it
expects. The result is an error that exposes system memory that would otherwise be protected and
inaccessible. Normally, an application will have access only to a specific area of memory, called a
buffer. The buffer overflow allows access to memory locations beyond the application’s buffer,
enabling an attacker to write malicious code into this area of memory.
As an example, an application may be expecting to receive a string of 15 characters for a
username. If input validation is not used and it receives more than 15 characters, it can cause a buffer
overflow and expose system memory. The following HTTP GET command shows an example of
sending a long string to the system to create a buffer overflow:

GET /index.php?username=ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
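
A hedged sketch of how such a request might be generated from a shell (the host name is hypothetical):

# build a 1,000-character string of Z's and send it as the username parameter
long=$(printf 'Z%.0s' {1..1000})
curl "http://test.example.com/index.php?username=${long}"
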
The buffer overflow exposes a vulnerability, but it doesn’t necessarily cause damage by itself.
However, once attackers discover the vulnerability, they exploit it and overwrite memory locations
with their own code. If the attacker uses the buffer overflow to crash the system or disrupt its
services, it is a DoS attack.
More often, the attacker’s goal is to insert malicious code in a memory location that the system
will execute. It’s not easy for an attacker to know the exact memory location where the malicious
code is stored, making it difficult to get the computer to execute it. However, an attacker can make
educated guesses to get close.
A popular method that makes guessing easier is using no operation (NOP, pronounced “no-op”)
commands, written as a NOP slide or NOP sled. Many Intel processors use hexadecimal 90
(often written as 0x90) as a NOP command, so a string of 0x90 bytes is a NOP sled. The attacker
writes a long string of 0x90 instructions into memory, followed by malicious code. When a computer is
executing code from memory and it comes to a NOP, it just goes to the next memory location. With a
long string of NOPs, the computer simply slides through all of them until it gets to the last one and
then executes the code in the next instruction. If the attacker can get the computer to execute code from
a memory location anywhere in the NOP slide, the system will execute the attacker’s malicious code.
The malicious code varies. In some instances, the attackers write code to spread a worm through
the web server’s network. In other cases, the code modifies the web application so that the web
application tries to infect every user who visits the web site with other malware. The attack
possibilities are almost endless.
Remember this
Buffer overflows occur when an application receives more data than it can
handle, or receives unexpected data that exposes system memory. Buffer
overflow attacks often include NOP instructions (such as 0x90) followed by
malicious code. When successful, the attack causes the system to execute
the malicious code. Input validation helps prevent buffer overflow attacks.
A buffer overflow attack includes several different elements, but they are all delivered at once. The
attacker sends a single string of data to the application. The first part of the string causes the buffer
overflow. The next part of the string is a long string of NOPs followed by the attacker’s malicious
code, stored in the attacked system’s memory. Last, the malicious code goes to work.
In some cases, an attacker is able to write a malicious script to discover buffer overflow
vulnerabilities. For example, the attacker could use JavaScript to send random data to another service
on the same system.
Although error-handling routines and input validation go a long way to prevent buffer overflows,
they don’t prevent them all. Attackers occasionally discover a bug allowing them to send a specific
string of data to an application causing a buffer overflow. When vendors discover buffer overflow
vulnerabilities, they are usually quick to release a patch or hotfix. From an administrator’s
perspective, the solution is easy: Keep the systems up to date with current patches.

Source: Darril Gibson, Security+ book

Thursday, October 1, 2015

How TCP sessions use a three way handshake

How TCP sessions use a three way handshake

When establishing a session, two systems normally start a TCP session by exchanging three packets in a TCP handshake. For example, when a client establishes a session with a server, it takes the following steps:
1. The client sends a SYN (synchronize) packet to the server.
2. The server responds with a SYN/ACK (synchronize/acknowledge) packet.
3. The client completes the handshake by sending an ACK (acknowledge) packet.

After establishing the session, the two systems exchange data.
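
You can watch the handshake on the wire with tcpdump (the interface name is an assumption); the flags column shows [S] for the SYN, [S.] for the SYN/ACK, and [.] for the final ACK:

tcpdump -n -i eth0 'tcp port 80'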

However, in a SYN flood attack, the attacker never completes the handshake by sending the
ACK packet. Additionally, the attacker sends a barrage of SYN packets, leaving the server with
multiple half-open connections.

In some cases, these half-open connections can consume a server’s resources while it is waiting
for the third packet, and it can actually crash. More often though, the server limits the number of these half-open connections. Once the limit is reached, the server won’t accept any new connections,
blocking connections from legitimate users. For example, Linux systems support iptables rules
that can set a threshold for SYN packets, blocking them once the threshold is exceeded. Although this
prevents the SYN flood attack from crashing the system, it can also deny service to legitimate clients.
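
A hedged sketch of that iptables approach, plus the SYN-cookie alternative built into the Linux kernel; the rate limits here are arbitrary examples, not recommendations:

# accept new SYN packets only up to a rate limit, drop the excess
iptables -A INPUT -p tcp --syn -m limit --limit 1/s --limit-burst 4 -j ACCEPT
iptables -A INPUT -p tcp --syn -j DROP
# alternative: SYN cookies let the server respond without keeping half-open state
sysctl -w net.ipv4.tcp_syncookies=1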

Logical Volume Manager Survival Guide

Logical Volume Manager Survival Guide

Martin Zahn, 05.03.2010

Overview

Logical volume management is a widely used technique for deploying logical rather than physical storage. With LVM, «logical» partitions can span multiple physical hard drives and can be resized. A physical disk is divided into one or more physical volumes (PVs), and volume groups (VGs) are created by combining PVs. Notice that a VG can be an aggregate of PVs from multiple physical disks.

Example Configuration

This article describes the Linux logical volume manager by showing an example of its configuration and usage. We use RedHat Linux for this example.
Physical Volumes PV
With LVM, physical partitions are simply called «physical volumes» or «PVs». These PVs are usually entire disks but may be disk partitions, for example /dev/sda3 in the above figure. PVs are created with pvcreate to initialize a disk or partition.
Command    Remarks
pvcreate   Initialize a disk or partition for use by LVM
pvchange   Change attributes of a physical volume
pvdisplay  Display attributes of a physical volume
pvmove     Move physical extents
pvremove   Remove a physical volume
pvresize   Resize a disk or partition in use by LVM2
pvs        Report information about physical volumes
pvscan     Scan all disks for physical volumes
Example: pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Volume Groups VG
The PVs in turn are combined to create one or more large virtual disks called «volume groups» or «VGs». While you can create many VGs, one may be sufficient. A VG can grow or shrink by adding or removing PVs from it (see the grow/shrink example at the end of this section).
The command vgcreate creates a new volume group using block special devices previously initialized with pvcreate.
Command       Remarks
vgcreate      Create a volume group
vgchange      Change attributes of a volume group
vgdisplay     Display attributes of volume groups
vgcfgbackup   Backup volume group descriptor area
vgcfgrestore  Restore volume group descriptor area
vgck          Check volume group metadata
vgconvert     Convert volume group metadata format
vgexport      Make volume groups unknown to the system
vgextend      Add physical volumes to a volume group
vgimport      Make exported volume groups known to the system
vgmerge       Merge two volume groups
vgmknodes     Recreate volume group directory and logical volume special files
vgreduce      Reduce a volume group
vgremove      Remove a volume group
vgrename      Rename a volume group
vgs           Report information about volume groups
vgscan        Scan all disks for volume groups and rebuild caches
vgsplit       Split a volume group into two
Example: vgcreate VGb1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
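
As noted above, a VG can grow or shrink after creation. For example (the extra partition /dev/sdf1 is hypothetical):

# grow the volume group with another PV (initialize it first with pvcreate)
vgextend VGb1 /dev/sdf1
# shrink it by removing an unused PV (pvmove any data off it beforehand)
vgreduce VGb1 /dev/sde1
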
Logical Volumes LV
Once you have one or more volume groups, you can create one or more virtual partitions called «logical volumes» or «LVs». Note that each LV must fit entirely within a single VG.
The command lvcreate creates a new logical volume by allocating logical extents from the free physical extent pool of that volume group.
Command      Remarks
lvcreate     Create a logical volume in an existing volume group
lvchange     Change attributes of a logical volume
lvdisplay    Display attributes of a logical volume
lvextend     Extend the size of a logical volume
lvmchange    Change attributes of the logical volume manager
lvmdiskscan  Scan for all devices visible to LVM2
lvreduce     Reduce the size of a logical volume
lvremove     Remove a logical volume
lvrename     Rename a logical volume
lvresize     Resize a logical volume
lvs          Report information about logical volumes
lvscan       Scan (all disks) for logical volumes
Example: lvcreate -L 400 -n LVb1 VGb1
This creates a logical volume, named «LVb1», with a size of 400 MB from the virtual group «VGb1».
Filesystems
Finally, you can create any type of filesystem you wish on the logical volume, including swap space. Note that some filesystems are more useful with LVM than others, because not all filesystems support growing and shrinking. ext2, ext3, and reiserfs support both growing and shrinking, and xfs supports growing (but not shrinking), so these are good choices.
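
For example, growing a logical volume and its ext3 filesystem in place looks like this (the sizes are illustrative):

# add 1 GB to the logical volume, then grow the filesystem to match
lvextend -L +1G /dev/VGb1/LVb1
resize2fs /dev/VGb1/LVb1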

Creating the Root Logical Volume «LVa1» during Installation

The physical volumes are combined into volume groups, with the exception of the /boot partition. The /boot partition (/dev/sda1) cannot be in a volume group because the boot loader cannot read it. If the root partition is on a logical volume, create a separate /boot partition which is not part of a volume group. In this example the swap space (/dev/sda2) is also created on a normal, non-LVM partition. The setup of LVM for the root filesystem (/dev/sda3) is done during the installation of RedHat Linux.
After creating the /boot filesystem and the swap space, select the free space and create the physical volume for /dev/sda3 as shown in the next figure.
  1. Select New.
  2. Select physical volume (LVM) from the File System Type pulldown menu.
  3. You cannot enter a mount point yet.
  4. A physical volume must be constrained to one drive.
  5. Enter the size that you want the physical volume to be.
  6. Select Fixed size to make the physical volume the specified size, select Fill all space up to (MB) and enter a size in MBs to give range for the physical volume size, or select Fill to maximum allowable size to make it grow to fill all available space on the hard disk.
  7. Select Force to be a primary partition if you want the partition to be a primary partition.
  8. Click OK to return to the main screen.
The result is shown in the next figure, the physical volume PV is located on /dev/sda3.
Once all the physical volumes are created, the volume groups can be created.
  1. Click the LVM button to collect the physical volumes into volume groups. A volume group is basically a collection of physical volumes. You can have multiple volume groups, but a physical volume can only be in one volume group.
  2. Change the Volume Group Name if desired.
  3. Select which physical volumes to use for the volume group.
Enter the name for the logical volume group as shown in the next figure.
The result is the logical volume group VGa1 located on the physical volume /dev/sda3.

Creating the Logical Volume «LVb1» manually

Create Partitions
For this LVM example you need an unpartitioned hard disk /dev/sdb. First you need to create physical volumes; to do this you need partitions or a whole disk. It is possible to run the pvcreate command directly on /dev/sdb, but I prefer to use partitions, and from those partitions I later create physical volumes.
fdisk -l
....
Device Boot      Start       End    Blocks  Id System
/dev/sda1   *        1       127   1020096  83 Linux
/dev/sda2          128       382  2048287+  82 Linux swap / Solaris
/dev/sda3          383      2610  17896410  8e Linux LVM
....
The partition type for LVM is 8e.
fdisk /dev/sdb

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2136, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2136, default 2136):
Using default value 2136

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
This is done for all other disks as well.
Create physical volumes
Use the pvcreate command to create physical volumes.
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

Physical volume "/dev/sdb1" successfully created
Physical volume "/dev/sdc1" successfully created
Physical volume "/dev/sdd1" successfully created
Physical volume "/dev/sde1" successfully created
Create volume group VGb1
At this stage you need to create a volume group, which will serve as a container for your physical volumes. To create a volume group named «VGb1» that includes all four partitions, issue the following command.
vgcreate VGb1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Volume group "VGb1" successfully created
vgdisplay

--- Volume group ---
VG Name               VGb1
System ID
Format                lvm2
Metadata Areas        4
Metadata Sequence No  2
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                1
Open LV               0
Max PV                0
Cur PV                4
Act PV                4
VG Size               65.44 GB
PE Size               4.00 MB
Total PE              16752
Alloc PE / Size       16717 / 65.30 GB
Free PE / Size        35 / 140.00 MB
VG UUID               2iSIeo-dw0Q-NA07-HUt0-Pjxq-m3gh-f33lAh
Create logical volume LVb1
To create a logical volume named «LVb1» that uses nearly all of the space in the volume group «VGb1» (65.3 GB of the 65.44 GB total), use the following command.
lvcreate -L 65.3G -n LVb1 VGb1

Rounding up size to full physical extent 65.30 GB
Logical volume "LVb1" created
Create a filesystem on the logical volume
The logical volume is almost ready to use. All you need to do is create a filesystem; the -j switch to mke2fs creates an ext3 journal.
mke2fs -j /dev/VGb1/LVb1

mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
8568832 inodes, 17118208 blocks
855910 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
523 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
Edit /etc/fstab
Add an entry for your newly created logical volume to /etc/fstab:
/dev/VGa1/LVa1          /                       ext3    defaults        1 1
/dev/sda1               /boot                   ext3    defaults        1 2
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
sysfs                   /sys                    sysfs   defaults        0 0
/dev/sda2               swap                    swap    defaults        0 0
/dev/VGb1/LVb1          /u01                    ext3    defaults        1 3
mount -a
You can now use the filesystem. For ongoing maintenance, use the LVM commands described above.