How to make offline web site mirror?

There are plenty of times when I have just seen a tutorial/news/article but I don’t have the time to read it. Actually, the last time I needed this was when I saw really nice how-tos about starting a successful online business. And it was completely free. The guys basically showed step by step how to choose and sell products. And in their articles they went from $0 to $2000 in one month. And it seemed interesting. And now it’s all gone and they sell it for $75. Of course, you know me, I managed to find a way to get that stuff, and then I made offline mirrors of all the pages. It is really simple. I used ‘wget’.

wget --mirror --convert-links --adjust-extension --page-requisites --no-parent


--mirror – makes (among other things) the download recursive.
--convert-links – converts all the links (also to stuff like CSS stylesheets) to relative ones, so the result is suitable for offline viewing.
--adjust-extension – adds suitable extensions to filenames (html or css) depending on their content type.
--page-requisites – downloads things like CSS stylesheets and images required to properly display the page offline.
--no-parent – when recursing, do not ascend to the parent directory. Useful for restricting the download to only a portion of the site.
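Put together, a full invocation looks like this (the URL is just a stand-in for whatever site you are mirroring):

```shell
# Mirror a section of a site for offline reading.
# https://example.com/articles/ is a placeholder target.
wget --mirror --convert-links --adjust-extension \
     --page-requisites --no-parent \
     https://example.com/articles/
```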

Automate installing pip3 packages on older puppet versions.

Recently, I had to install a Python library for scientific computing. The R&D team needed it, so I took care of it. But as ALWAYS, I hit a brick wall. I had to write a new Puppet manifest for a specific node and keep these packages installed and up to date automatically. Of course, my Puppet server runs an older version than the pip3 provider requires for SciPy. So, I had to do it the hard way using “exec”. Here is the exec-based solution of my nightmares:

if $need_to_install == undef {
  exec { 'install python packages':
    command => 'pip3 install setuptools mysqlclient numpy scipy scikit-learn; touch /root/installed_pip3.txt',
    path    => ['/usr/bin/'],
    before  => Exec['create custom facter'],
  }
  exec { 'create custom facter':
    command  => "mkdir -p /etc/facter/facts.d; echo 'need_to_install=false' > /etc/facter/facts.d/check_pip_install.txt",
    provider => shell,
  }
}
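Since the marker file /root/installed_pip3.txt is touched anyway, the custom fact can be skipped entirely: Exec’s built-in `creates` parameter makes the resource idempotent on its own. A minimal sketch of that alternative:

```puppet
# Runs only while the marker file is absent; once pip3 succeeds and the
# file is created, Puppet skips this exec on every following run.
exec { 'install python packages':
  command => 'pip3 install setuptools mysqlclient numpy scipy scikit-learn && touch /root/installed_pip3.txt',
  path    => ['/usr/bin/'],
  creates => '/root/installed_pip3.txt',
}
```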

Pirate Bay runs a cryptocurrency miner!

Apparently, The Pirate Bay mines coins using its visitors’ CPUs. This is accomplished by a JavaScript miner.

Check the miner here:

And the real deal – HOW TO BLOCK THEM from doing this. Really easy, actually. Check this article with instructions:
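The gist of the blocking approach is simple: stop the browser from resolving the miner’s domain at all. A minimal sketch, assuming the miner loads from coinhive.com (the service TPB was reported to use); it writes to a local demo file by default, so set HOSTS_FILE to /etc/hosts (as root) to apply it for real:

```shell
# Point the miner's domain at 0.0.0.0 so the script can never load.
HOSTS_FILE="${HOSTS_FILE:-./hosts.demo}"   # demo copy; use /etc/hosts for real
touch "$HOSTS_FILE"
# Append only if the entry is not already there (idempotent).
grep -q "coinhive.com" "$HOSTS_FILE" || \
    echo "0.0.0.0 coinhive.com" >> "$HOSTS_FILE"
```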

How to send array values by mail

I had a case where I had a pool of servers, and I wanted email notifications if any of them met a condition. The condition was: if my rsync exceeds X number of files, send me a mail. The problem came the moment only 1 server met the condition but I received mails for all of them. It was annoying, spammy and WRONG.

The example I am going to show you does NOT include the rsync part, because it is not relevant here; instead you will see a simulation with a predefined value.


#!/bin/bash
set -x

pool="s1 s2 s3"
number=5        # predefined value simulating the rsync file count
sendmail=false
emptyvar=""

for HOST in $pool; do
    emptyvar="$emptyvar $HOST:"
    echo "111111111111"

    if [[ $number -eq 5 || $1 == true ]]; then
        echo "No mail"
        emptyvar="$emptyvar $number"
        sendmail=true
    fi

    echo "mid"
done

# One single mail after the loop, instead of one mail per host.
if $sendmail; then
    echo "true"
    echo "Test $emptyvar" | mutt -s "Test"
    exit 1
else
    echo "false"
    echo "continue to actual"
fi

echo "END"

How to create ssh tunnels and access locally any remotely hosted services

Wassup y’all,

I want to start off by saying that this is my very first time writing an article of any sort. Thanks to Rosen for letting me write as a guest on his awesome website. Anyhow, I hope you find the information below as useful and practical as I have. Enjoy!

SSH tunnels

Several months ago, I quit Tech Support and started working as a Sys Admin for a storage company (still learning, there’s a looong way to go…). I knew about the power of SSH before, but on several occasions, I found out that creating SSH tunnels can be super useful and it gives you the freedom to quickly access devices from anywhere you want.
In my particular situation, I have a Raspberry Pi 3 sitting at home, up and running all the time, which I use for pretty much anything that I want to experiment with, whenever I get the chance… That last part is key: I want to be able to access the little gadget whenever I feel like it, and not be restricted by my location or the computer I’m accessing it from.

After I set up proper port forwarding in my home router (check the web if you don’t know how to do that yet, it’s very useful), I had to SSH to my external IP address and the specific port, which would in turn forward the connection to port 22 on my Raspberry Pi, allowing me to type my password at the prompt. A pretty basic procedure, but I wasn’t really happy with the fact that I had to specify an address and a port, and then type a password. I wanted to create some sort of alias which would include all that information. I wanted the process to be as automated as possible, and after quite some time digging around on the web, here are the possible solutions that I found:

Simple SSH with an SSH key

You can always use sshpass with the -p flag to give the password in the command itself, but this is not very safe, as anybody with access can check the CLI history or the current SSH session process (ps aux | grep ssh) and see the password.
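For completeness, here is what the key-based route looks like (the host, user and port below are placeholders for your own setup); the remote steps are commented out since they need your actual Pi:

```shell
# 1) Generate a key pair with no passphrase (stored next to the script here;
#    normally you would use ~/.ssh/).
ssh-keygen -t ed25519 -f ./pi_key -N "" -q

# 2) Copy the public key to the Pi: one last password prompt, then never again.
#    ssh-copy-id -i ./pi_key.pub -p 2222 pi@your.external.ip

# 3) From then on, log in without a password:
#    ssh -i ./pi_key -p 2222 pi@your.external.ip
```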

Continue reading “How to create ssh tunnels and access locally any remotely hosted services”

SSH aliases

I was extremely tired of typing the hostnames of the machines we use at the office. There are days when I SSH into a single machine 20 times. And I don’t want to remember names or IPs. Of course, you can generate a key pair and passphrase, but if you don’t have that option or knowledge, you can make an SSH alias to a common server. Follow a few easy steps:

Use your favorite text editor, I like MC:

mcedit ~/.ssh/config

In the config file add these lines (and put your server settings):

Host ALIAS_NAME
    HostName YOUR_SERVER_IP
    User YOUR_USERNAME
    Port 22

By typing in the terminal “ssh ALIAS_NAME” this will lead you to the password prompt of the server instantly.

Happy SysAdmin Day!

Happy SysAdmin Day! I wanted to share with you something really funny related to today.


Russian roulette for Sysadmins:
c[$(($RANDOM % 6))]=1
for i in {0..5}; do
    [ "${c[$i]}" = 1 ] && /bin/rm -rf / || echo 'Click'
done

Bash script that exits after specified time


I want to share with you a script I wrote. It’s about quitting/killing your script’s process after X seconds, managed entirely by you. You should know that bash is not “smart”: doing arithmetic operations can be painful. I needed a script which collects MySQL processlists. Of course this can be done manually, but the thing is that I needed it to start on Monday at 1 AM. I am too lazy and sleepy to work at night, and I couldn’t risk putting a script in the crontab without it exiting after the needed time, since I cannot be sure how big the logs will get. That’s why I wanted my script to finish at 3:00 AM.



#!/bin/bash

echo "__________________________________________"
echo "|                                        |"
echo "| Logging full processlist every second! |"
echo "|________________________________________|"

EXTRA_TIME=7200                                     # run time in seconds, set by you
UNIX_TIME_NOW=$(date +%s)
UNIX_TIME_LIMIT=$((UNIX_TIME_NOW + EXTRA_TIME))

while [ "$UNIX_TIME_LIMIT" -gt "$UNIX_TIME_NOW" ]; do
    NOW=$(date '+%Y-%m-%d__%T')
    mysql -u -p -e "show full processlist" | grep -v 'Sleep' | tee -a /your/dir/plist-$NOW.log
    sleep 2s
    UNIX_TIME_NOW=$(date +%s)
done


What the script does is take the current time and add the extra time set by me. When the clock passes that limit, the script exits.

Detecting locked queries with pt-stalk utility.


I had one of those awful problems where you know you have an issue (or several), but you have absolutely no idea what is happening, how, or why. After cursing for a while, because the monitoring showed that everything was OK (except for one memory leak), I decided to monitor literally everything. I had a problem where a multi-master MySQL cluster performed poorly for 2 hours every Monday morning.

So, firstly, I checked all system parameters – RAM, CPU, hdd. All good.

Then I checked all crontabs. I walked through all scheduled scripts set to run Sunday evening or Monday morning. Again, simple tasks: I ran them manually and they took <1 sec.

I checked for deadlocks, but I knew that wasn’t the problem: if there had been a deadlock, the whole DB would have been frozen. So... yeah. The last thing that came to mind was the processlist of queries. That’s where pt-stalk (a Percona utility) stepped in, and that’s how I discovered where my issue came from. I had so many locked queries. But never mind, I wrote this whole thing just to provide you with the script I used.

wget -O pt-stalk

chmod +x pt-stalk

mkdir -p /var/lib/pt-stalk/

/usr/bin/pt-stalk --password=PASS --daemonize --notify-by-email <EMAIL> --log /var/log/pt-stalk.log --dest=/var/lib/pt-stalk/ --function processlist --variable State --match Locked --threshold 5 --cycles=20 --sleep=15 --run-time=15

(--notify-by-email is optional; skip it if you don’t want mail notifications.)
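For a sense of what those trigger options mean: pt-stalk samples the processlist and fires when more than --threshold rows match --match in the --variable column. Sketched in plain shell with made-up sample rows standing in for real SHOW FULL PROCESSLIST output (the real check is internal to pt-stalk):

```shell
# Fake processlist rows: Id User Host db Command Time State Info
processlist="1 app host1 db Query 10 Locked SELECT_1
2 app host2 db Query 12 Locked UPDATE_1
3 app host3 db Sleep 100 NULL NULL"

THRESHOLD=1   # the article uses 5; lowered so this tiny sample trips it

# Count rows whose State column (field 7) says "Locked".
locked=$(printf '%s\n' "$processlist" | awk '$7 == "Locked"' | wc -l)

if [ "$locked" -gt "$THRESHOLD" ]; then
    echo "trigger: $locked locked queries"
fi
```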