Linux - Create a Linux Bridge Hub

 

How do Linux Bridge Hubs Work?

Linux bridges have been around since ancient times and are a simple, easy way to attach multiple physical and virtual ports in a Linux system. Layer-2 bridging (or switching) takes place in the Linux kernel, so network frame transfer is quick with minimal latency. For all intents and purposes, the Linux bridge serves the majority of situations that involve moving traffic efficiently from one port to another, much like a dedicated hardware network switch.

Bridges and switches learn where devices reside by building a table of MAC addresses associated with ports. For example, when a frame is received on a physical or virtual port from system A, the source MAC address and incoming port of system A are added to the table. Then the frame is broadcast out all other ports, since the destination port for the MAC address of system B is not yet known. When system B responds, the same thing happens in the opposite direction: the source MAC address and incoming port association for system B are placed in the table. Now the bridge/switch knows which ports both systems are connected to and can direct network frames only between these two ports, using the most efficient path. The Linux kernel maintains a timer for each table entry to prevent the table from filling up. When a system stops sending frames, such as when it is shut down, the kernel waits 300 seconds by default and then removes that entry from the table.
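
If you want to watch this learning happen on a system that already has a bridge (br0 below is just a placeholder name), the kernel's forwarding table and the ageing timer can be inspected with iproute2 and sysfs. A quick sketch:

root@prxmx1:~# bridge fdb show br br0
root@prxmx1:~# cat /sys/class/net/br0/bridge/ageing_time
30000

The first command lists the learned MAC-to-port entries; the second shows the ageing timer, which the kernel reports in hundredths of a second, so 30000 corresponds to the 300-second default.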

There are occasions where we need certain devices to see all the traffic going across the bridge, not just the traffic between one port and another. For instance, we may have Intrusion Detection System (IDS) or packet sniffing virtual machines that need to see all traffic for analysis and anomaly alerting. In this case, we need the Linux bridge to act like an Ethernet hub. Hubs are legacy devices that repeat all frames out every port so all devices see all traffic, in contrast to the one-to-one forwarding that Linux bridges natively use.

To allow all devices to see all frames, the timer (also called the ageing value) is set to zero. In essence, the table is disabled and source MAC address to port mappings aren't recorded. Each frame received is then sent out all ports all the time, because the destination is always unknown (no entry in the table). The drawback is that every system has to process each frame to determine whether it's for itself, taking up processing time versus having only directed frames sent to it. Modern systems and VMs have plenty of computing power, so this has minimal impact in today's environments. This is an interesting way to manipulate the Linux bridge to get the results we need.

Configuration

There are two ways to change the ageing timer to get this hub effect: dynamically and statically. We'll cover both below.

bridge-utils

The bridge-utils package has a program that allows you to change the ageing value on the fly. The brctl command (short for "bridge control") has been around for a long time and is a great tool for Linux bridge manipulation.

Install:

root@prxmx1:~# apt-get -y install bridge-utils

Show brctl command options:

root@prxmx1:~# brctl --help
Usage: brctl [commands]
commands:
    addbr         <bridge> add bridge
    delbr         <bridge> delete bridge
    addif         <bridge> <device> add int to bridge
    delif         <bridge> <device> delete int from bridge
    hairpin       <bridge> <port> {on|off} turn hairpin on/off
    setageing     <bridge> <time>  set ageing time
    setbridgeprio <bridge> <prio>  set bridge priority
    setfd         <bridge> <time>  set bridge forward delay
    sethello      <bridge> <time>  set hello time
    setmaxage     <bridge> <time>  set max message age
    setpathcost   <bridge> <port> <cost> set path cost
    setportprio   <bridge> <port> <prio> set port priority
    show          [ <bridge> ] show a list of bridges
    showmacs      <bridge> show a list of mac addrs
    showstp       <bridge> show bridge stp info
    stp           <bridge> {on|off} turn stp on/off
root@prxmx1:~#

Let's create a temporary bridge that we will turn into our bridge hub.

root@prxmx1:~# brctl addbr br0
root@prxmx1:~# brctl addif br0 eth1
root@prxmx1:~# brctl addif br0 eth2
root@prxmx1:~# brctl show
bridge name    bridge id            STP enabled    interfaces
br0            8000.000c297a82bc    no             eth1
                                                   eth2
root@prxmx1:~#
Command explanation:

  • brctl addbr br0 - creates the Linux bridge br0 in the kernel
  • brctl addif br0 eth1 - adds the eth1 interface to the bridge br0
  • brctl addif br0 eth2 - adds the eth2 interface to the bridge br0
  • brctl show - shows all the configured Linux bridge info

As you can see from the previous --help output, one of the options for brctl is setageing. If we set the ageing value to zero, then all interfaces attached to the bridge will receive all network frames regardless of destination. This is what we want so our IDS device will see all the network traffic. The bridge essentially times out every entry in the kernel MAC address table immediately as each frame is received and doesn't keep the port association information.

To do this, we use the following command:

root@prxmx1:~# brctl setageing br0 0

This takes effect immediately; you can verify it by issuing:

root@prxmx1:~# brctl showstp br0
br0
 bridge id       8000.000c297a82bc
 designated root 8000.000c297a82bc
 root port       0            path cost            0
 max age         20.00        bridge max age       20.00
 hello time      2.00         bridge hello time    2.00
 forward delay   15.00        bridge forward delay 15.00

 ageing time     0.00
 hello timer     0.00         tcn timer            0.00
 topology change timer  0.00  gc timer             0.00
 ...

root@prxmx1:~#

As you can see, the ageing time is now at zero and all ports will receive all traffic.
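
Note that many newer distributions ship iproute2 instead of (or alongside) bridge-utils. If brctl isn't available on your system, the same zero-ageing change can be made with the ip command; a rough equivalent, assuming the same br0 bridge:

root@prxmx1:~# ip link set dev br0 type bridge ageing_time 0
root@prxmx1:~# ip -d link show dev br0 | grep -o 'ageing_time [0-9]*'
ageing_time 0

The second command just pulls the ageing value back out of the detailed link output to confirm the change took.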

Keep in mind that the above configuration is in effect only until the system is rebooted. To make the configuration permanent and effective after a reboot, edit your networking file as follows. I'm using Debian Linux in this example, so use your Google-fu for the Linux distro you're using.

    root@prxmx1:~# cat /etc/network/interfaces
    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet static
        address x.x.x.x
        netmask 255.255.255.0
        network x.x.x.0
        broadcast x.x.x.255

    auto br0
    iface br0 inet static
        address x.x.x.x
        netmask 255.255.255.0
        network x.x.x.0
        broadcast x.x.x.255
        bridge_ports eth1 eth2
        bridge_stp off
        bridge_fd 0

        bridge_ageing 0
        bridge_maxwait 0

    auto eth1
    iface eth1 inet manual

    auto eth2
    iface eth2 inet manual
    root@prxmx1:~#

Notice the bridge ports (eth1, eth2) are set to manual mode on boot. This way they can be added to the bridge without unnecessary and conflicting configurations.
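
If you'd rather not reboot to pick up the new settings, bouncing the bridge with the classic ifupdown tools usually does the trick (assuming nothing critical is riding on those links while they flap):

root@prxmx1:~# ifdown br0 && ifup br0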

Verification:

You can now verify that your bridge hub is working by running a tool called tcpdump. tcpdump captures network frames on an interface and either shows them on the command line or records them to a file to view later in tcpdump or Wireshark. Wireshark is a GUI application that allows you to graphically analyze network packets (a very cool and recommended tool! Look it up).

To see if a port is receiving all network traffic, run tcpdump on the interface attached to our IDS system, recording to a packet capture file (.pcap format).

root@prxmx1:~# tcpdump -i eth1 -w traffic-file.pcap
^C
10456 packets captured
10456 packets received by filter
0 packets dropped by kernel
root@prxmx1:~#

Command explanation: 

  • tcpdump - base command
  • -i eth1 - identifies the interface to capture traffic from
  • -w traffic-file.pcap - tells tcpdump to write the captured packets to a file 
  • ^C [ctrl-c] - tells tcpdump to quit capturing traffic
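
Before committing to a capture file, a quick live check also works: watch one bridge port while generating traffic between two other ports (for example, a ping between two VMs attached elsewhere on the bridge) and confirm the frames show up. A sketch:

root@prxmx1:~# tcpdump -i eth1 -n icmp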

Now, when we look at the traffic file tcpdump created, we see traffic from all the ports. When looking at the layer-3 information, we should see traffic to and from IP addresses not assigned to the port being captured on.

root@prxmx1:~# tcpdump -r traffic-file.pcap
reading from file traffic-file.pcap, link-type EN10MB (Ethernet)
19:33:37.432852 IP 192.168.209.100.ssh > 192.168.209.1.51296: Flags [P.], seq 1748148483:1748148607, ack 4192579883, win 501, options [nop,nop,TS val 3185763333 ecr 608909605], length 124
19:33:37.432967 IP 192.168.209.1.51296 > 192.168.209.100.ssh: Flags [.], ack 124, win 2046, options [nop,nop,TS val 608909660 ecr 3185763333], length 0

...
root@prxmx1:~#

Command explanation:

  • tcpdump - base command
  • -r traffic-file.pcap - reads the .pcap file and shows the network traffic on the screen
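
To make the hub behavior really obvious, you can filter out frames addressed to the capturing port itself along with broadcast traffic; any unicast frames left over were flooded to us. The MAC address below is a placeholder for eth1's own hardware address:

root@prxmx1:~# tcpdump -r traffic-file.pcap -n 'not ether host aa:bb:cc:dd:ee:ff and not ether broadcast'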

Now your IDS has a full view of all traffic that traverses the bridge hub and can look for suspect and malicious communications between the virtual machines and the outside networks. This is great for virtualized environments, especially when doing hacking / security scenarios in a testing or home lab.

When labbing up a scenario for testing or learning, run an IDS, such as Security Onion, attached to the bridge hub to see what your attacks look like on the network. Look at your IDS tools and see if the attack traffic has fired off alerts; if not, learn to write signatures that will do so. This is an awesome purple team exercise to see what attacks and breaches look like from both sides of the security table!

Home Lab - Hardware (Part 1)

Why a home lab?

For a cybersecurity professional (or any technical individual for that matter), a home lab is advantageous for truly understanding the systems, networks and protocols that make up enterprise networks. With a base understanding of how things should work, you can then learn how to exploit systems in ways that shouldn't work.

The experience of creating, maintaining and using a home lab, with all the configuration and troubleshooting involved, will give you hands-on experience that you simply won't get from textbooks. Sure, home labs aren't on the same scale as corporate environments, but you will be able to build, configure and test technologies in ways that you shouldn't on production networks and systems. Unless you work for one of those companies where the production network IS the testing network (that never happens, does it?).

What's cool about having your own home lab:

  • you can break whatever you want and you will only affect your systems and the time you take fixing it
  • you can customize all of it; you don't have to accept a network or system that someone else has built, with all its idiosyncrasies and issues
  • if you no longer want to work on a project, wipe it out and build something else; it's your technical cave
  • want to try the new coolest Linux distro, latest Microsoft OS or some awesome PTP application? Just fire up a virtual machine and do it!

It comes down to control, full control. From learning to install server hardware components to implementing virtual machines, containers or an on-premises private cloud, it's all up to you. Heck, you can make your own Cat5 cables if you like (practical, but that may not be your idea of a fun time with your buddies).

You may be asking, "why don't you just spin up systems in the cloud? It's pretty cheap to do so." Well, in my line of administration work, I occasionally have opportunities to acquire used servers and devices for pretty cheap. Plus, I prefer to pay for something once and use it for a long time versus a recurring bill from a cloud provider every month. Don't get me wrong, I'm an advocate for the cloud since we use a lot of cloud services at my work, but personally, I like having the ability to connect to my systems at any time and pick up where I left off without worrying about CPU cycles eating into my wallet. Plus, I can run as many virtual machines as my hardware can handle (I've run around 50 or so at the same time in the past; imagine that AWS bill!). Like they say, the cloud is just someone else's computer anyway...

My Lab

In the beginning, I created my home lab as a way to learn Cisco networking. As more and more systems and devices have become virtual machines (VMs), my physical routers are gathering dust while my pizza box servers are the ones getting used. This is great for us technical heads, since less hardware is needed to accomplish our objectives. My home lab has evolved with my learning journey into whatever I have needed it to be.

My lab consists of the following systems:

Home Lab - Hardware

  • 2x Dell PowerEdge R720 systems
    • 2x 8-core CPUs, 192GB RAM, 3TB HD space, 2x 10Gb fiber network interfaces
  • 2x HPE ProLiant DL360 G9 systems
    • 2x 12-core CPUs, 256GB RAM, 5TB HD space, 4x 10Gb fiber network interfaces 
  • 2x Cisco switches
    • 1Gb for management traffic, 10Gb for inter-server traffic
  • 1x Avocent KVM (keyboard, video, mouse)
    • server consoles accessible from across the network
  • 2x Power Distribution Units (PDUs)
    • power controls for all systems/devices from across the network
  • 2x Linksys WRT-54G Wireless Access Points (WAPs)
    • older units for doing WiFi testing and attacks

These items have been acquired over a long period of time and are certainly overkill for most home labs. These systems run a lot of VMs for various purposes, but the majority of them are security related.
 

Hardware for Beginner Lab

If you are just starting out with your home lab project, just about any spare second-hand system will do to start. Some have used an old laptop or desktop that was stored in a deep closet crevice as a first home lab system. If you have a pretty beefy laptop or desktop, you can run a virtualization program such as VMware Workstation (Windows) or VMware Fusion (macOS) to run a couple of VMs at a time.

If you have the itch for something bigger (home labs can be addictive), there are tons of systems for sale on eBay or Craigslist for a much reduced cost. A great deal can be had with patience and diligent looking. 

What has worked for me in the past when looking for used servers is to first look for a base system at a decent price. A base system consists of the chassis, motherboard and power supplies. Most servers come with a CPU, RAM and a hard drive, but they may be very minimal. This isn't necessarily a bad thing. It's a starting point that you can build upon.

As time goes on and your needs increase (which they most certainly will), look into adding more RAM DIMMs, which is the main factor in how many VMs you can run simultaneously. CPU usage will often stay pretty low for most VMs (depending on the operating system), but memory will be consumed even if the VM's activity is minimal. Make sure you acquire RAM DIMMs that are compatible with your motherboard AND any current and future CPUs that you want to purchase. CPUs are particular about the number and types of memory DIMMs they are paired with, so shop carefully. Be aware that some systems can handle DIMMs of different sizes and speeds, but others may not. Do your research in the system vendor's documentation and website to avoid making an ill-advised purchase.
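
Before ordering DIMMs, it's worth seeing exactly what's already populated, slot by slot, and matching that against the vendor's memory configuration guide. Assuming dmidecode is installed, something like this gives a quick inventory:

root@prxmx1:~# dmidecode --type memory | grep -E 'Locator|Size|Speed|Type:'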

Of course, who doesn't want more horsepower? Once you work out the details for your memory DIMMs, adding a second CPU should be next on the list. It's recommended that both processors be identical for maximum compatibility; in most cases, it's required that they be identical. Make sure the manufacturer, number of cores, clock speed, stepping value and socket type are correct for your system's motherboard.

The next item is usually network interface cards (NICs), and possibly lots of them. Most server systems from the last ten years or so have at least two NICs integrated into the motherboard, but for inter-server networking, it's always helpful to have multiple network interfaces. Luckily, dual and quad-port Gigabit Ethernet cards are fairly inexpensive. 10 Gigabit fiber Ethernet cards can be pricey along with the associated cables, but they are coming down in price on the used market. Verify that whatever cards you purchase are compatible with the hypervisor you need to run. VMware ESXi is especially particular about this, so check your hypervisor's version compatibility list to be sure. Linux-based hypervisors are less prone to compatibility issues, but use your Google-fu to reassure yourself.
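
A quick way to identify exactly which NIC silicon and driver a card uses before checking it against a hypervisor's compatibility list (the interface name enp3s0 below is just an example, and ethtool may need to be installed):

root@prxmx1:~# lspci | grep -i ethernet
root@prxmx1:~# ethtool -i enp3s0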

This has been post 1 of a 4-part series on building a home lab. See the other parts for additional information.

Part 2 - Software

Part 3 - Networking   COMING SOON!
 
Part 4 - InfoSec    COMING SOON!

Kali Linux - Post-Install Checklist

After installing Kali Linux, either on a dedicated system or in a virtual machine, there are tools that need to be installed to make hacking life a bit easier.

This post will discuss the applications that I install after a clean Kali Linux installation. It won't cover how to do the initial installation, since there are numerous videos and blogs that already explain that process.

Terminator

In my quest to find the right terminal emulator, I was impressed with Terminator. It's been around for quite a while and works really well.

[Terminator screenshot]

Features include the ability to split a console window into several panes, create several tabs of multiple panes, configuration profiles, custom layouts, custom key-bindings and more. I haven't used all the cool features of Terminator, but what I have used works efficiently for me.

Install:

root@kali:~# apt-get -y install terminator

xrdp

In the past, I've tried VNC server programs for remote desktop access but overall, I haven't been impressed. The VNC protocol has its place, but I have to admit, Microsoft did a pretty good job with the RDP protocol. It's efficient, light on the network, provides resource redirection options for disks and USB drives, sound and the clipboard, and it's fairly easy to use. The only drawback is that it's a one-to-one protocol; multi-user access would be a really welcome feature that would knock it out of the park.

So, you'd think that accessing Linux desktops via RDP would be sketchy, since Linux and Windows can be oil and water when it comes to compatibility. Not so much when using xrdp. After installation, I was able to connect to my Kali Linux desktop almost right away with minimal configuration. I like it when it just works.

The only gotcha to remember is that the user account being used to access the desktop via RDP needs to be logged out of the X Windows environment on the local console. Other than that, easy peasy.

Install: 

root@kali:~# apt-get -y install xrdp
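
On current Kali builds, services aren't started automatically after installation, so you'll likely need to enable and start xrdp yourself; something along these lines, then point any RDP client (mstsc.exe, Remmina, etc.) at the machine:

root@kali:~# systemctl enable --now xrdp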

htop/iftop

The next two tools I install are local system monitoring programs: htop and iftop. When doing scanning, attacks or other activities, applications may be running without providing progress output on the command line. A quick look at two terminals open with htop and iftop tells me that the application is indeed doing its thing, since I can see the processes and network connections in real time. It also helps to see the system resources in use to determine whether I need to add memory or CPUs, which may be contributing to slowness. These tools are lightweight, so I have no qualms about letting them run continually.

Install:

root@kali:~# apt-get -y install htop iftop
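
My usual layout is htop in one Terminator pane and iftop bound to the interface I care about in another; the -P flag keeps port numbers visible so connections are easier to match to the tool generating them (eth0 is just an example interface):

root@kali:~# htop
root@kali:~# iftop -i eth0 -P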

Making of CrackerJack - The Rig (Part 1)

Security professionals, whether attackers or defenders, need to know how to recover passwords (more commonly called cracking passwords). This may include cracking acquired Windows SAM or AD database files and Linux shadow files, performing password audits of administrator and user accounts, and other security purposes.

A powerful tool that assists with these tasks is a password cracking machine, often called a password cracking "rig". These systems often resemble cryptocurrency mining rigs, but their hardware and software are tailored to password hash cracking. Cryptomining of certain blockchains and password cracking are similar in the way they are processed: both are optimized for parallel processing. Therefore, these systems contain multiple graphics processing units (GPUs), essentially the PCI-e cards that gamers use to play complex video games. GPUs are the main engine components that do the parallel processing that gets those password hashes cracked.
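
To get a feel for what all that GPU parallelism buys you on a rig like this, hashcat's built-in benchmark mode is a quick test; the hash mode below (1000, NTLM) is just one common example, and the hostname in the prompt is purely illustrative:

root@crackerjack:~# hashcat -b -m 1000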