Improving Database Response Time Using Indexes

I once had a web application that took a long time to load a page that fetches data from the database: around 6 seconds before the user could see anything. When the application was new, with fewer records, everything was fast.

I audited the code, timing every process to find the part that was taking the longest. The problem was that some queries were joining tables with a large number of rows.

The solution was to create column indexes.

From around 6 seconds, the page now loads in around 200 milliseconds.

Consider the query below:
SELECT users.* FROM users
JOIN addresses ON addresses.user_id = users.id
JOIN cities ON cities.id = addresses.city_id
JOIN provinces ON provinces.id = addresses.province_id

If it takes a long time to run, consider indexing the addresses table:
ALTER TABLE addresses ADD INDEX(user_id, city_id, province_id);

Note that the order of the columns in the index matters: user_id comes first, followed by city_id then province_id, matching the order in which the columns are used in the query.
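To check whether MySQL actually uses the new index, you can prefix the query with EXPLAIN (a sketch using the same example tables; the exact output columns vary by MySQL version):

```sql
EXPLAIN SELECT users.* FROM users
JOIN addresses ON addresses.user_id = users.id
JOIN cities ON cities.id = addresses.city_id
JOIN provinces ON provinces.id = addresses.province_id;
```

In the result, look at the key column for the addresses row: it should show the new composite index instead of NULL, and the rows estimate should drop sharply.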

While an index speeds up reads, it also slows down writes, because indexes need to be updated whenever a row is inserted or modified.

Combine indexing with caching to further improve response time.

Unify www and non-www site

Create a .htaccess file inside the document root and add the rules below. (Note: redirecting to %{HTTP_HOST} would keep the www prefix, so the first rule captures the host without it.)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.(.*)$ [NC]
RewriteRule ^ https://%1%{REQUEST_URI} [L,R=301]
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

The first RewriteRule does a permanent redirect (301) of any request whose host starts with www to the bare-domain HTTPS URL, https://example.com

The second RewriteRule does a permanent redirect (301) of any plain HTTP request to its HTTPS equivalent, https://example.com

The www, non-www, HTTPS, and non-HTTPS versions are treated as different sites by search engines, so it's important to unify them for SEO purposes.

Reference: https://moz.com/community/q/302-or-301-redirect-to-https

Serving web content from a user's home directory

I had a problem getting "Forbidden: You don't have permission to access /index.php on this server." even after confirming that all virtual host settings and folder permissions were correct.

I had set $ sudo chown -R apache:apache /home/anthony/example.com and $ sudo chmod -R 755 /home/anthony/example.com, but I still got the error.

When I looked at /home ($ cd /home; $ ls -l) I saw the problem: the anthony directory had 700 permissions,
drwx------ 3 anthony anthony 115 Feb 7 22:54 anthony
so apache could not traverse the anthony directory. My solution was to change it to 750 with $ sudo chmod 750 /home/anthony and add the apache user to the anthony group with $ sudo usermod -a -G anthony apache

Then restart apache
sudo service httpd restart

That solved the problem and I can now see my website.
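The directory-traversal requirement can be demonstrated with a throwaway directory (a sketch; the paths are stand-ins for /home/anthony, and stat -c is GNU coreutils syntax):

```shell
# Simulate the /home/anthony situation in a temporary directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/anthony/example.com/public_html"
chmod 700 "$tmp/anthony"        # like the default drwx------ on /home/anthony
before=$(stat -c %a "$tmp/anthony")
# Other users (including apache) cannot traverse the directory now.
chmod 750 "$tmp/anthony"        # group members may now traverse (x bit for group)
after=$(stat -c %a "$tmp/anthony")
echo "permissions: $before -> $after"
rm -rf "$tmp"
```

With 750 plus apache added to the anthony group, Apache can descend into the DocumentRoot without giving every other user access to the home directory.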

Below is my VirtualHost Config

<VirtualHost *:80>
    ServerAdmin email@example.com
    ServerName example.com
    DocumentRoot /home/anthony/example.com/public_html
    
    <Directory "/home/anthony/example.com">
        AllowOverride None
        # Allow open access:
        Require all granted
    </Directory>
    
    <Directory "/home/anthony/example.com/public_html">
        AllowOverride All
        Options Indexes FollowSymLinks
        Require all granted
    </Directory>

</VirtualHost>

Integrate Laravel and Chat Server using WebSocket

Create a file chat-server.php
<?php
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;
use App\Http\Chat;

require 'vendor/autoload.php';

// load the Laravel framework
require __DIR__.'/bootstrap/autoload.php';
$app = require_once __DIR__.'/bootstrap/app.php';
$kernel = $app->make(Illuminate\Contracts\Http\Kernel::class);
$response = $kernel->handle(
    $request = Illuminate\Http\Request::capture()
);

$server = IoServer::factory(
    new HttpServer(
        new WsServer(
            new Chat()
        )
    ),
    9090,
    '0.0.0.0' // listen on all interfaces, not just localhost
);
$server->run();

App\Http\Chat is a class that implements Ratchet\MessageComponentInterface. A detailed implementation can be found here: http://socketo.me/docs/hello-world

To run it, open a command line and type $ php chat-server.php. If you update the file or any code it depends on, make sure to restart the chat server: press Ctrl+C, then run the command again.

If you encounter "Could not find driver" while running chat-server.php, it means the PDO MySQL driver is not enabled for the PHP CLI. You can find solutions here: http://codes.anthonyaraneta.com/php/could-not-find-driver-in-ubuntu-cli

Add Virtual Host to Xampp for example.com

In the C:\xampp\apache\conf\extra\httpd-vhosts.conf file, add a new VirtualHost block for each site:


# assumes Apache listens on port 8080 (Listen 8080 in httpd.conf)
<VirtualHost *:8080>
    DocumentRoot "C:/xampp/htdocs/app_folder"
    ServerName example.com
</VirtualHost>

<VirtualHost *:8080>
    DocumentRoot "C:/xampp/htdocs"
    ServerName localhost
</VirtualHost>

Then restart apache

Open the hosts file (in Windows it is located in C:\Windows\System32\drivers\etc; make sure hidden and system files are shown) and append
127.0.0.1 example.com

Save and close hosts file

You should now be able to browse http://example.com:8080/, which loads the content from your local disk.

DateTime() is better than strtotime()

I’m using Windows 64 bit and PHP 5.6

strtotime() can't go beyond the year 2038 here, because it returns a Unix timestamp, and a 32-bit timestamp overflows in January 2038:

<?php
$number_of_years = 2;
$start_date = '2037-12-31';
$end_date = strtotime('+' . $number_of_years . ' year', strtotime($start_date));
$end_date = strtotime('-1 day', $end_date);
echo 'strtotime() Result: '.date('Y-m-d', $end_date);

echo "<br/>";

$d = new DateTime($start_date);
$d->modify("+".$number_of_years." year");
$d->modify("-1 day");
echo 'DateTime() Result: '.$d->format('Y-m-d');

Result:

strtotime() Result: 1969-12-31
DateTime() Result: 2039-12-30

Usage of DateTime() is also easier.

Run Cron Job on Intervals


# Minutes  Hours  Day of Month  Month  Day of Week                Command
# 0-59     0-23   1-31          1-12   0 (Sunday)-6 (Saturday)    shell command
#run every 15 minutes
*/15 * * * * curl http://example.com/controller/action >/dev/null 2>&1

#run every 3 hours
0 */3 * * * curl http://example.com/controller/action >/dev/null 2>&1

#run every 2 days (odd days of the month) at 1 AM
0 1 */2 * * curl http://example.com/controller/action >/dev/null 2>&1

#run every 2 months (January, March, ...) on day 1
0 0 1 */2 * curl http://example.com/controller/action >/dev/null 2>&1

#run every Tuesday,Thursday,Saturday at 1AM
0 1 * * 2,4,6 curl http://example.com/controller/action >/dev/null 2>&1

>/dev/null 2>&1 discards both the standard output and the standard error of curl, so cron does not mail the output anywhere.
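The redirection can be seen in isolation with a quick shell experiment (a plain echo stands in for curl):

```shell
# Both streams are written, and both are discarded by >/dev/null 2>&1:
# stdout goes to /dev/null, then 2>&1 points stderr at the same place.
captured=$( { echo "to stdout"; echo "to stderr" 1>&2; } >/dev/null 2>&1 )
echo "captured: [$captured]"
```

Nothing is captured, because by the time 2>&1 is applied, stdout already points at /dev/null.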

Doing Daily Remote Backup

Scenario 1

I have 2 remote servers: Server1 is the live server and Server2 is the backup.
Server1 is already live and has no cron job running; I don't want to install cron on Server1 for various reasons.
Server2 has cron running and will connect to Server1 to execute the backup script daily-backup-script-v2.sh every day at 1 AM.

Open crontab in Server2
$ crontab -e
If your editor is vi, press Insert to start editing.
# Minutes  Hours  Day of Month  Month  Day of Week  Command
# 0-59     0-23   1-31          1-12   0-6          shell command

Append the command to run
0 1 * * * ssh user@Server1 sh /mnt/extradisk/daily-backup-script-v2.sh
Press ESC to stop editing, then type :wq! and press Enter to save the changes and quit.

Below is the script for daily-backup-script-v2.sh
#!/bin/bash
#START
weekday=$(date +"weekday_%u")
file="/mnt/extradisk/backups/database_$weekday.sql.gz"
mysqldump -u user -ppassword --all-databases | gzip > "$file"
scp -P 10022 "$file" user@Server2:~/folder-daily-backups/
domain="/mnt/extradisk/backups/daily-backup-domains_$weekday.tar.gz"
tar -cpzf "$domain" -C / usr/share/glassfish3/glassfish/domains
scp -P 10022 "$domain" user@Server2:~/folder-daily-backups/
#END

The script above dumps all MySQL databases and compresses them into one file.
It also archives the GlassFish domain files.
Both archives are then copied from Server1 to Server2 for remote backup.

Scenario 2

Both servers have running cron job.

Server1 executes its own backup script every 3 hours
0 */3 * * * sh ~/backups/backup-script.sh

backup-script.sh code below
#!/bin/bash
#START
hour=$(date +"hour_%H")
file="/home/user/backups/database_$hour.sql.gz"
mysqldump -hipaddress -u user -ppassword database | gzip > "$file"
#END

Server2 fetches Server1's backups every day at 1 AM
0 1 * * * scp -P 10022 user@Server1:~/backups/* ~/BACKUPS/project/

Note: Server1 has Server2's public key (id_rsa.pub) in its authorized_keys file, and vice versa, so ssh and scp run without a password prompt.

$ date --help
to see more date formats
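The format strings used in the scripts above can be tried directly on the command line (a sketch of the two formats used here):

```shell
weekday=$(date +"weekday_%u")   # %u: day of week, 1 (Monday) through 7 (Sunday)
hour=$(date +"hour_%H")         # %H: hour of day, 00 through 23
echo "$weekday $hour"
```

Because the filenames only contain the weekday (or hour), each backup overwrites the one from a week (or a day) earlier, giving a rolling retention window automatically.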

Cleaning Malicious Scripts Injected in WordPress PHP files


Note: this only works if the malicious script was injected into the first line of every PHP file.

First, back up the files
$ tar -zcvf public_html.infected.tar.gz public_html

Then go inside public_html to execute the code there
$ cd public_html

Find all files with the .php extension and execute the sed command on each.
sed does an in-place search and replace; -i.infected backs up each file before editing it, adding an .infected suffix.
'1 s/.*/<?php/' matches the entire first line (.* means everything on line 1) and replaces it with <?php

Note: there's a possibility that parts of the site may stop functioning, so be ready to fix them.
The problem I encountered is that a few PHP files containing only HTML ended up with errors because of the <?php inserted in their first line.
Check the server log for detailed errors.

Another error I encountered on WordPress pages: a first line like <?php get_header();?> also gets replaced with just <?php, which breaks the template.

Execute the cleanup code below
$ find . -type f -name "*.php" -exec sed -i.infected '1 s/.*/<?php/' {} \;
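Here is a small sketch of what the sed invocation does to a single infected file (the file contents are made up for illustration):

```shell
tmp=$(mktemp -d)
# Fake an infected file: malware prepended on line 1, before the real <?php tag.
printf '%s\n%s\n' '<?php /*malware*/ $x=1; ?><?php' 'echo "hello";' > "$tmp/index.php"
# Replace the whole first line with a clean <?php; keep a .infected backup.
sed -i.infected '1 s/.*/<?php/' "$tmp/index.php"
firstline=$(head -n1 "$tmp/index.php")
echo "$firstline"
ls "$tmp"   # index.php plus the index.php.infected backup
```

The second line (the real code) is untouched; only line 1 is rewritten, which is why the technique assumes the injection is confined to the first line.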

Check that the command had no adverse effect on the site.
Once you have confirmed the site still works, delete the .infected backups

$ find . -type f -name "*.infected" -delete