Keep Linux Command Running Even After Closing Shell

Normally, when you run a Linux command in a shell, the command stops once you close the terminal.
To keep it running, use nohup. The trailing & makes it run in the background.

$ nohup command &
$ disown
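
By default nohup appends the command's output to nohup.out in the current directory. A variant that sends the output to a log file instead (the path here is just illustrative):

$ nohup command > /tmp/command.log 2>&1 &
$ disown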

If the command listens on a port, check that it's actually listening:
$ netstat -ap | grep "LISTEN"

tcp     0   0 *:ssh            *:*      LISTEN  -
tcp     0   0 *:9090           *:*      LISTEN  22018/php
tcp     0   0 *:9191           *:*      LISTEN  22023/php
tcp     0   0 localhost:mysql  *:*      LISTEN  -
tcp6    0   0 [::]:http        [::]:*   LISTEN  -
tcp6    0   0 [::]:ssh         [::]:*   LISTEN  -
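
On newer distributions where netstat is no longer installed, ss from iproute2 shows the same information:

$ ss -ltnp | grep 9090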

Ubuntu: Restrict Access to PHPMyAdmin

If you want to allow only 8.8.8.8 to view your phpMyAdmin, add these directives

Order Deny,Allow
Deny from All
Allow from 8.8.8.8

to /etc/phpmyadmin/apache.conf. The full <Directory> block then looks like this:

<Directory /usr/share/phpmyadmin>
        Options FollowSymLinks
        DirectoryIndex index.php
        #allow access to specific IP
        Order Deny,Allow
        Deny from All
        Allow from 8.8.8.8

        <IfModule mod_php5.c>
                AddType application/x-httpd-php .php

                php_flag magic_quotes_gpc Off
                php_flag track_vars On
                php_flag register_globals Off
                php_admin_flag allow_url_fopen Off
                php_value include_path .
                php_admin_value upload_tmp_dir /var/lib/phpmyadmin/tmp
                php_admin_value open_basedir /usr/share/phpmyadmin/:/etc/phpmyadmin/:/var/lib/phpmyadmin/:/usr/share/php/php-gettext/:/usr/share/javascript/
        </IfModule>

</Directory>

Then restart Apache:
$ sudo service apache2 restart
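
Note: Order/Deny/Allow is the Apache 2.2 syntax. On Apache 2.4 the equivalent access control, assuming the same single allowed IP, is:

<Directory /usr/share/phpmyadmin>
        Require ip 8.8.8.8
</Directory>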

Unify www and non-www site

Create a .htaccess inside the document root and add

RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
RewriteRule ^(.*)$ https://%1%{REQUEST_URI} [L,R=301]
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

The first RewriteRule does a permanent redirect (301) of any traffic whose host starts with www. to https://example.com; the %1 back-reference is the hostname with the www. prefix stripped.

The second RewriteRule does a permanent redirect (301) of any plain-HTTP traffic to https://example.com.

www, non-www, https, and non-https versions are treated as different sites by search engines,
so it's important to unify them for SEO purposes.
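
A quick sanity check of the redirects from the command line:

$ curl -sI http://www.example.com/ | grep -i '^location'

You should get a 301 response with Location: https://example.com/.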

Reference: https://moz.com/community/q/302-or-301-redirect-to-https

Serving Web Content from a User's Home Directory

I had a problem getting "Forbidden: You don't have permission to access /index.php on this server." even after confirming that all virtual host settings and folder permissions were correct.

I had already set $ sudo chown -R apache:apache /home/anthony/example.com and $ sudo chmod -R 755 /home/anthony/example.com, but I still got the error.

When I looked at /home

$ cd /home
$ ls -l

I saw the problem: the anthony directory has 700 permissions,

drwx------ 3 anthony anthony 115 Feb 7 22:54 anthony

so apache can't traverse into it. My solution was to change the anthony directory to 750

$ sudo chmod 750 /home/anthony

and add the apache user to the anthony group

$ sudo usermod -a -G anthony apache

Then restart Apache:
$ sudo service httpd restart

That solved the problem and I can now see my website.

Below is my VirtualHost Config

<VirtualHost *:80>
    ServerAdmin email@example.com
    ServerName example.com
    DocumentRoot /home/anthony/example.com/public_html
    
    <Directory "/home/anthony/example.com">
        AllowOverride None
        # Allow open access:
        Require all granted
    </Directory>
    
    <Directory "/home/anthony/example.com/public_html">
        AllowOverride All
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>

</VirtualHost>
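
A quick way to verify that the apache user can traverse every directory in the path is namei (part of util-linux), which lists the permissions of each path component:

$ sudo namei -l /home/anthony/example.com/public_html/index.php

Any component that apache can't execute (traverse) will trigger the Forbidden error.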

Run Cron Job on Intervals


# Minutes   Hours   Day of Month   Month   Day of Week                Command
# 0-59      0-23    1-31           1-12    0 (Sunday)-6 (Saturday)    shell command
#run every 15 minutes
*/15 * * * * curl http://example.com/controller/action >/dev/null 2>&1

#run every 3 hours
0 */3 * * * curl http://example.com/controller/action >/dev/null 2>&1

#run at 1 AM on every 2nd day of the month (the 1st, 3rd, 5th, ...)
0 1 */2 * * curl http://example.com/controller/action >/dev/null 2>&1

#run at midnight on day 1 of every 2nd month (Jan, Mar, May, ...)
0 0 1 */2 * curl http://example.com/controller/action >/dev/null 2>&1

#run every Tuesday, Thursday, and Saturday at 1 AM
0 1 * * 2,4,6 curl http://example.com/controller/action >/dev/null 2>&1

>/dev/null 2>&1 sends stdout to /dev/null and redirects stderr to stdout, suppressing all output from curl.
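
If you'd rather keep the output for debugging, append it to a log file instead (the log path is just illustrative):

*/15 * * * * curl http://example.com/controller/action >> /var/log/cron-curl.log 2>&1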

Doing Daily Remote Backup

Scenario 1

I have 2 remote servers: Server1 is live and Server2 is the backup.
Server1 is already live and has no cron jobs set up; I don't want to install cron jobs on Server1 for some reasons.
Server2 has cron running and will connect to Server1 to execute the backup script daily-backup-script-v2.sh every day at 1 AM.

Open the crontab on Server2
$ crontab -e
Press Insert (or i) to start editing.

# Minutes   Hours   Day of Month   Month   Day of Week   Command
# 0-59      0-23    1-31           1-12    0-6           shell command

Append the command to run
0 1 * * * ssh user@Server1 sh /mnt/extradisk/daily-backup-script-v2.sh
Press Esc to stop editing, then type :wq! and press Enter to save the changes and quit.
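
Confirm the entry was installed:

$ crontab -l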

Below is the script for daily-backup-script-v2.sh

#!/bin/bash
#START
# %u is the day of the week (1-7), so backups rotate over a 7-day window
weekday=$(date +"weekday_%u")
file="/mnt/extradisk/backups/database_$weekday.sql.gz"
# dump all databases and compress the dump
mysqldump -u user -ppassword --all-databases | gzip > "$file"
scp -P 10022 "$file" user@Server2:~/folder-daily-backups/
domain="/mnt/extradisk/backups/daily-backup-domains_$weekday.tar.gz"
# archive the GlassFish domains directory
tar -cpzf "$domain" -C / usr/share/glassfish3/glassfish/domains
scp -P 10022 "$domain" user@Server2:~/folder-daily-backups/
#END

The above script dumps all MySQL databases and gzips the dump into a single file.
It also backs up the GlassFish domain files into a compressed tarball.
Both archives are copied from Server1 to Server2 for the remote backup; since the filenames carry the weekday, this keeps a rolling seven days of backups.
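
Restoring on Server2 is the reverse; a sketch, assuming Monday's files and the same MySQL credentials:

$ gunzip < database_weekday_1.sql.gz | mysql -u user -p
$ tar -xpzf daily-backup-domains_weekday_1.tar.gz -C /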

Scenario 2

Both servers have cron running.

Server1 is going to execute its backup script every 3 hours
0 */3 * * * sh ~/backups/backup-script.sh

backup-script.sh code below

#!/bin/bash
#START
# %H is the hour (00-23), so each day overwrites the previous day's hourly dumps
hour=$(date +"hour_%H")
file="/home/user/backups/database_$hour.sql.gz"
mysqldump -h ipaddress -u user -ppassword database | gzip > "$file"
#END

Server2 will fetch Server1's backups every day at 1 AM
0 1 * * * scp -P 10022 user@Server1:~/backups/* ~/BACKUPS/project/

Note: Server1 has Server2's public key (id_rsa.pub) in its authorized_keys and vice versa, so the ssh and scp commands run without password prompts.
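
If the keys aren't set up yet, a minimal way to generate and install them (run on each server, pointing at the other; -p matches the custom SSH port used above):

$ ssh-keygen -t rsa
$ ssh-copy-id -p 10022 user@Server2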

Run $ date --help to see more date formats.

Cleaning Malicious Scripts Injected in WordPress PHP files


Note: This only works if the malicious script is injected into the first line of every PHP file.

First, back up the files
$ tar -zcvf public_html.infected.tar.gz public_html

Then go inside public_html and execute the commands from there
$ cd public_html

Find all files with a .php extension and run the sed command on each (the full command is shown below).
sed does an in-place search and replace; -i.infected saves a backup copy of each file with an .infected suffix before editing it.
'1 s/.*/<?php/' matches the entire first line (.* means the whole line) and replaces it with <?php, wiping out the injected code.
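
Before rewriting anything, it helps to survey what the first lines actually contain; this prints the most common first lines across all PHP files so the injected payload stands out:

$ find . -type f -name "*.php" -exec sed -n '1p' {} \; | sort | uniq -c | sort -rn | head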

Note: There's a possibility that the site may stop working afterwards, so be ready to fix it.
The problem I encountered is that a few PHP files containing only HTML broke because of the <?php added to their first line.
Check the server log for the error details.

Another error I encountered on WordPress pages is that a first line such as <?php get_header();?> also gets replaced with a bare <?php, so those calls have to be restored by hand.
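
If a file breaks, restore it from its .infected backup; to roll everything back at once, a sketch:

$ find . -type f -name "*.infected" | while read -r f; do cp "$f" "${f%.infected}"; done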

Execute the cleanup code below
$ find . -type f -name "*.php" -exec sed -i.infected '1 s/.*/<?php/' {} \;

Check that the command has had no adverse effect on the site.
Once you've confirmed the site still works, delete the .infected backups:

$ find . -type f -name "*.infected" -delete