Using sfdisk to recover a partition table on Linux

As he re-entered the sfdisk dump manually in the remote recovery console, using the devil’s editor (vi), he was silently thanking the Linux developers for not screwing around with the file system when it cannot be mounted.

Messing around with partition tables, disk volumes, and similar critical configuration parameters can lead to quite unexpected and unintended results. So, it may be a good idea to actually dump the current configuration before you begin your magic.

Using sfdisk, you can dump your Linux partition configuration in a fairly straightforward way. You can try the command by typing just sfdisk -d /dev/disk, where disk is one of the disks in your Linux system. For a list of disks in your system, use the lsblk command. They are identified as “disk” (surprise).

sfdisk -d /dev/sda > sda.txt

This would dump the partition table data for the /dev/sda disk to the file sda.txt. Your output will look something like this:

label: dos
label-id: 0xa828a5d8
device: /dev/sda
unit: sectors

/dev/sda1 : start= 2048, size= 997376, type=83, bootable
/dev/sda2 : start= 999424, size= 999424, type=82
/dev/sda3 : start= 1998848, size= 249659359, type=83

The partition table information can then later be restored by issuing the reverse, i.e.

sfdisk /dev/sda < sda.txt

DO NOT PERFORM THE ABOVE COMMAND IF YOU DON’T KNOW WHAT YOU ARE DOING!

This procedure may come in handy if you, like me, manage to screw up the partition table and find yourself at the (initramfs) prompt when you restart your Linux machine. You will (obviously) need to save the dump file (sda.txt above) in a location other than your computer. Using this method, it’s often possible to recover your partition table from a rescue boot (be it on CD, DVD or a flash drive).
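The dump-and-copy routine above can be sketched as a small script. This is only a sketch: the `list_disks` helper and `backup.example.org` destination are my own inventions, and you should review the disk list before trusting it.

```shell
# Sketch: dump the partition table of every whole disk to a file,
# then copy the dumps somewhere other than the machine itself.
# list_disks is a hypothetical helper; backup.example.org is a placeholder.
list_disks() {
  # expects "NAME TYPE" pairs on stdin (as printed by lsblk -dn -o NAME,TYPE)
  # and prints only the names whose type is "disk"
  awk '$2 == "disk" { print $1 }'
}

backup_partition_tables() {
  for d in $(lsblk -dn -o NAME,TYPE | list_disks); do
    sfdisk -d "/dev/$d" > "$d.txt"
  done
  # keep the dumps off the machine, or they go down with the ship
  scp ./*.txt backup.example.org:partition-dumps/
}
```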

I happened to have a previous terminal session window open with the above information, so I hand-typed it from one window to another, where I was running the remote recovery console.

There are a lot more complex partition setups than the above, and sfdisk may not work in those cases or for certain RAID and LVM setups. But it’s a good procedure in applicable situations.

Show which process/program is listening to what port using netstat and lsof

lsof -Pnl +M -i4
lsof -Pnl +M -i6

or

netstat -tulpn
netstat -npl

There are obviously a number of ways to accomplish this, but these variations will cover a lot of ground. You can also combine this with grep to filter out things you don’t need to see, or to only include specific processes and/or ports.
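For example, grep can narrow the listing down to TCP listeners that are reachable from the outside. The sample output below is abbreviated and the PIDs are made up; on a live system you would pipe the real netstat output through the same filters.

```shell
# abbreviated, made-up sample of what `netstat -tulpn` prints
sample='tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 812/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1044/master
udp 0 0 0.0.0.0:68 0.0.0.0:* 655/dhclient'

# keep TCP listeners, drop the ones bound to loopback only; live version:
#   netstat -tulpn | grep '^tcp' | grep -v '127\.0\.0\.1'
printf '%s\n' "$sample" | grep '^tcp' | grep -v '127\.0\.0\.1'
```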

See post from @geek1968 on Instagram

Using MTR to create a text file report

mtr --report --report-cycles 20 www.instagram.com > ignetrpt.txt

The above should be entered on one line. Using MTR this way makes it easy to simply send a trace report via e-mail. It can also be used in an automated way to generate a report e-mail when a system monitor fails.
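A sketch of the automated variant follows. The recipient address and the `mail` command are assumptions (any sendmail-compatible mailer would do), and the run is guarded so it only happens where mtr is actually installed.

```shell
target=www.instagram.com
report=ignetrpt.txt
# the exact command from the post, kept in a variable so a
# monitor script can also log what it ran
cmd="mtr --report --report-cycles 20 $target"

# only attempt the run where mtr and a mailer actually exist
if command -v mtr >/dev/null 2>&1 && command -v mail >/dev/null 2>&1; then
  $cmd > "$report"
  mail -s "MTR report for $target" ops@example.org < "$report"
fi
```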

See post from @geek1968 on Instagram

SSH tunnel to use other mailserver than localhost

Because I have a lot of virtual machines, laptops, work environments, and so on, I never seem to find the time to set up SMTP authentication everywhere. I typically use Linux for everything except hardcore gaming, so it’s only natural that I have some sort of mail server installed, like Postfix. The problem with using that mail server to send e-mail is that I also quite often have dynamic IP addresses on these machines, which doesn’t work well with “e-mail protection” (well..) like SPF.

So instead of making my life very complicated, I have a trusted server on the Internet through which I send e-mail.

If you were looking for something fancy in this article, you can move along now, there’s nothing to see 🙂

To make all my Linux work instances believe they’re talking to an SMTP server locally, I simply set up a tunnel from the given Linux instance to this trusted server on the Internet using the ever so versatile OpenSSH / SSH. I know there are a lot of ways to do this, but this is what works for me:

Local machine or “where I work”

I have a private/public key keypair on all of these machines. The public key is placed in the /root/.ssh/authorized_keys file on the trusted server that is running the mail server.

On this machine, as root, I set up a tunnel that looks like this:

ssh -N -L 25:localhost:25 root@mail.example.org -p 2222

This will create a tunnel from “localhost” port 25 (where I work) to “localhost” port 25 on mail.example.org. The SSH connection itself is made to mail.example.org on port 2222; if the mail.example.org server is running its SSH daemon on the standard port (22), you can remove the “-p 2222” part.

Mail server

On this server, I only need to put the public key from the local machine “where I work” into /root/.ssh/authorized_keys to allow the tunnel to come up.

When I access port 25 on my local machine “where I work”, it will be sent through the tunnel and then attempt to access “localhost” port 25 on the mail server. The mail server software, Postfix in my case, will never know this connection did not actually originate from “inside” the machine, but rather through the tunnel.

Closing thoughts

You can (obviously) make this somewhat more automated with tools like AutoSSH, init scripts, and what not. The above only intends to show how uncomplicated it is to create useful SSH/SMTP tunnels 🙂
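As a sketch of the AutoSSH route, a minimal systemd unit could look like the one below. The unit name and autossh path are assumptions; the tunnel arguments are the ones from above.

```ini
# /etc/systemd/system/smtp-tunnel.service (hypothetical name)
[Unit]
Description=SSH tunnel to mail.example.org for local SMTP
After=network-online.target

[Service]
# -M 0 disables autossh's own monitor port and relies on SSH keepalives
ExecStart=/usr/bin/autossh -M 0 -N \
  -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
  -L 25:localhost:25 -p 2222 root@mail.example.org
Restart=always

[Install]
WantedBy=multi-user.target
```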


Where is the independent and competent government authority for IT?

We should perhaps be open about the fact that 2017 is probably NOT the year that will go down in history as having the most missteps in IT and IT operations at government agencies, the parliament, and so on. Rather, by “sheer luck”, we happen to have found out that this has happened and keeps happening, with the knowledge of those in power. If it continues like this, maybe we should just put society-critical and in many cases classified information on an open server so that Google can index it (make it searchable). That way we at least don’t risk losing the information, and we save enormous amounts of money at the same time. Win win!

But a bigger question, I think, is: why do we not have an independent authority responsible for operating, and for decisions concerning the operation of, IT systems in public administration? How is it that in 2017 we find out that even the Government Offices have no idea what they are doing, and what risks they are taking, when they choose to “outsource operations” or “bring in expertise”?

Even though I am not, in principle, in favor of further complicating the already rather messy bureaucracy we have, this actually feels like something that could be justified. There are too many dinosaurs, too many political appointments, and a definite lack of knowledge.

Maybe it is time to appoint a group of people and form a new authority, where the three foremost requirements are IT competence, confidentiality, and independence. When, in most other situations, we strive to have the best competence in the right place, why is that not the case for IT in public administration?

Good faith is not a valid excuse for incompetence.

Maybe this would fall under the remit of MSB (the Swedish Civil Contingencies Agency), maybe not.

I have said it before and I will say it again: we have only seen the tip of an enormous iceberg.

#svpol #fail #it #sakerhet

Forcing OutOfOffice response to always fire in Zimbra

We had a need to create an e-mail account in Zimbra that would always generate an automated response to incoming e-mails. So we activated the OutOfOffice functionality (or “Vacation Mode” as some people prefer to call it). This is great, and you do have some control from the ZWC (Zimbra Web Client) user interface.

The “problem” with the OOO functionality is that it is designed for human interaction. So, in an attempt to be somewhat “intelligent”, Zimbra remembers to whom it has sent an automated response, and if a second message arrives from the same sender within a certain period, it will not send another one. This makes sense: if I have sent an e-mail to John Doe, and Mr Doe is on vacation, I probably know this to be true even if I send him another message within a few hours or days. So I don’t want a second automated response.

We wanted it to send an automated response every time it received a message. zmprov to the rescue!

As the ‘zimbra’ user, from the CLI prompt, enter:

zmprov ma acct@tobemod.com zimbraPrefOutOfOfficeCacheDuration <value>


The default <value> in our installation was 7d, presumably meaning seven days. So I set it to ‘1s’ and anyone sending e-mail to acct@tobemod.com now gets an automated response, even if they send several messages within a short period of time.
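It can be worth reading the attribute back before and after changing it. A sketch, guarded so it only runs where zmprov actually exists:

```shell
# as the zimbra user; account and attribute as in the post
acct=acct@tobemod.com
attr=zimbraPrefOutOfOfficeCacheDuration

if command -v zmprov >/dev/null 2>&1; then
  zmprov ga "$acct" "$attr"      # show the current value (7d in our case)
  zmprov ma "$acct" "$attr" 1s   # cache replies for one second only
  zmprov ga "$acct" "$attr"      # confirm the change took
fi
```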

Troubles doing factory reset on a Ubiquiti EdgeRouter

If you’re having problems doing a factory reset on a Ubiquiti EdgeRouter, and can’t ping the router on 192.168.1.1 or connect to the admin web interface, you may want to check that you have connected your computer to the eth0 port on the router. It’s not immediately obvious that this is the port where the admin interface, at https://192.168.1.1, resides. Oh, and don’t forget to hardwire your own computer to the 192.168.1.0 network. This is really a no-brainer, but still not entirely obvious.

Slow SMTP sessions and SSH logins on your Zimbra server?

When upgrading a Zimbra server to a somewhat recent version (8.7.3 for example), it may attempt to install its own DNS Cache (zimbra-dnscache). It’s obvious that this may cause issues if you are running some other DNS caching service, or your own BIND, on the server. But these are rather obvious issues and not unique to Zimbra.

What is not equally obvious, however, is that zimbra-dnscache may appear to be running without actually doing what it is supposed to do.

My first hint that things weren’t as they appeared to be was extremely slow external SMTP sessions when clients like Thunderbird and other “client mailers”, as well as some web-based helpdesk applications, were attempting to send e-mail via Zimbra.

The upgrade to Zimbra 8.7.3 had gone quite well, so it wasn’t an obvious place to start looking.

Until I noticed that SSH logins were also quite slow to this server. They had never been slow before. Checking the SSH configuration on the server did not reveal much other than the fact that it was indeed using reverse DNS lookups.

Checking /etc/resolv.conf made everything clear. Zimbra had, in an attempt to use its own zimbra-dnscache, added “nameserver 127.0.0.1” to /etc/resolv.conf. In a perfect world, that may have been what I wanted …

After removing 127.0.0.1 from /etc/resolv.conf, inbound SMTP sessions from “client mailers” and web applications went from 7-10 seconds down to 0.1-0.5 seconds. Case closed.

I’m thinking Zimbra should add a post-installation sanity check. When all services are up and running, a DNS lookup to a known host (www.zimbra.com for example) should return within less than a second or two, anything else is an indication that the system may not function as intended.
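Such a check could be sketched roughly like this. The check_dns helper is my own invention, not something Zimbra ships, and the two-second limit is an arbitrary threshold.

```shell
# rough sanity check: resolve a known host through the system resolver
# and warn if it takes longer than a couple of seconds
check_dns() {
  host=$1
  limit=${2:-2}   # seconds; arbitrary threshold
  start=$(date +%s)
  if ! getent hosts "$host" >/dev/null 2>&1; then
    echo "FAIL: cannot resolve $host"
    return 1
  fi
  elapsed=$(( $(date +%s) - start ))
  if [ "$elapsed" -gt "$limit" ]; then
    echo "SLOW: lookup for $host took ${elapsed}s - check /etc/resolv.conf"
    return 2
  fi
  echo "OK: $host resolved in ${elapsed}s"
}

# usage: check_dns www.zimbra.com
```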

#zimbra-dnscache