Apple Releases Public Beta of iOS 10

Apple has just released a public beta of iOS 10 that anyone can download. To get the beta you must register in the company’s Beta Software Program.

iOS 10 brings numerous new features including a huge update to Messages that delivers more ways to message, like stickers and full-screen effects; the ability for Siri to do more by working with apps; redesigned Maps, Photos, Apple Music and News apps; and the Home app for managing home automation products.

You can learn more about iOS 10 here. Hit the link below to download iOS 10 or sign up for beta access.

Basic Linux Commands List

Here is a list of common Linux commands that will be helpful if you ever use the command-line interface in Linux. Most users stick to the graphical user interface, which usually provides tools and front-ends for the common commands, but this tutorial will help even the average user in case the X server crashes, fails to start, or is misconfigured. So continue reading for some of the more common Linux bash commands.

  • ls Displays everything in the current directory
  • ls -a Displays all files, including hidden
  • ls -l Displays all files, along with the size and timestamp
  • tar -zxpf Uncompresses tar.gz files
  • tar -xpf Uncompresses .tar files
  • gunzip Uncompresses .gz files
  • cp /path/to/old /path/to/new Copies a file to a new file
  • mv /path/to/old /path/to/new Moves a file to a new file, or rename
  • mkdir Creates a directory
  • rmdir Deletes a directory
  • rm Deletes a file
  • rm -rf Deletes a directory and everything in it, without prompting
  • cd /path/to/dir Moves to a directory
  • cd .. Move up one directory
  • cd ~ Moves to your home directory
  • cd - Moves to the previous directory
  • pwd Displays the present working directory (the one you’re in)
  • pico Edits a file
  • ftp Connect to a FTP server
  • lynx View a webpage
  • df Displays the hard drive stats
  • quota Displays your quota
  • uptime Displays the uptime of the server
  • uname -a Displays the operating system stats
  • whoami Displays your info
  • who Displays others connected to the server
  • last Displays the last login
  • whereis Tells where a file is located
  • BitchX IRC Client
  • mail Check your email
  • ps -x Displays processes you're running
  • ps -a Displays all processes running
  • ps -ux Displays running processes, with CPU/Memory usage
  • kill pid# Kills a process
  • kill -9 pid# Force-kills a process that won't die normally
  • killall proc_name Kills all running process of the same type
  • whatis Description of commands
  • man command Displays help on the command (manual)
  • nano Same as pico (install it with yum install nano if it isn't present)
  • top Gives an overall view of what is going on with the server, including memory usage, server load and running processes; press “q” to exit
  • sar -q Gives a report of the process list and the 1-minute and 5-minute load averages, sampled every 10 minutes since midnight server time
  • tar -zcf filename.tar.gz file Tars up the file or directory of your choice. Replace filename.tar.gz with the name you want your tar file to have (with the .tar.gz extension on the end) and replace file with the file or directory you want to tar up. You can also use a path/to/file for both.
  • updatedb Updates the locate/search database.
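As a quick sanity check of the tar commands above, here is a small round trip you can run in a throwaway directory (the file names are made up for the example):

```shell
# Work in a scratch directory with a made-up file name.
cd "$(mktemp -d)"
echo "hello from the archive" > notes.txt

# tar -zcf creates a gzipped archive; tar -zxpf extracts it.
tar -zcf notes.tar.gz notes.txt
mkdir extracted
cd extracted
tar -zxpf ../notes.tar.gz
cat notes.txt
```

Running this prints the original line, confirming the file round-tripped through the archive intact.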


Restart a service:
service servicename restart

Stop a service:
service servicename stop

Start a service:
service servicename start

Status (doesn’t work on all):
service servicename status

On a RedHat cPanel server, here are the useful services: (CentOS, x10's default OS for VPSs, is a stripped-down RedHat OS.)


Root crontab: (crontab -e can be used by any user with crontab permissions to edit their own crontab. If you run it as root, it edits root's crontab, and the same goes for any other user: when “bob” runs crontab -e, he edits his own crontab, not root's, provided he has permission to do so.)
crontab -e

To edit another user's cron jobs: (run as a superuser, such as root; not available to regular users.)

crontab -u username -e

Replace username with the actual username of the user whose crontab you want to edit.
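If you haven't seen the crontab file format before, each line is five time fields (minute, hour, day of month, month, day of week) followed by the command to run. A couple of illustrative entries (the script path here is made up):

```shell
# Run a hypothetical backup script at 3:30 AM every day.
30 3 * * * /usr/local/bin/backup.sh

# Run the same script every 10 minutes instead.
*/10 * * * * /usr/local/bin/backup.sh
```

Save the file and cron picks up the new schedule automatically.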

(We’re still talking about RedHat [CentOS] that is running cPanel below. You can do most, if not all, of this from the WHM, so feel free to skip ahead a bit. :P )

  • /scripts/adddns Add a Dns Entry
  • /scripts/addfpmail Install Frontpage Mail Exts
  • /scripts/addservlets Add JavaServlets to an account (jsp plugin required)
  • /scripts/adduser Add a User
  • /scripts/admin Run WHM Lite
  • /scripts/apachelimits Add Rlimits (cpu and mem limits) to apache.
  • /scripts/dnstransfer Resync with a master DNS Server
  • /scripts/editquota Edit A User’s Quota
  • /scripts/finddev Search For Trojans in /dev
  • /scripts/findtrojans Locate Trojan Horses
  • Suggested Usage:
  • /scripts/findtrojans > /var/log/trojans
  • /scripts/fixtrojans < /var/log/trojans
  • /scripts/fixcartwithsuexec Make Interchange work with suexec
  • /scripts/fixinterchange Fix Most Problems with Interchange
  • /scripts/fixtrojans Run on a trojan horse file created by findtrojans to remove them
  • /scripts/fixwebalizer Run this if a user’s stats stop working
  • /scripts/fixvaliases Fix a broken valias file
  • /scripts/hdparamify Turn on DMA and 32bit IDE hard drive access (once per boot)
  • /scripts/initquotas Re-scan quotas. Usually fixes Disk space display problems
  • /scripts/initsuexec Turn on SUEXEC (probably a bad idea)
  • /scripts/installzendopt Fetch + Install Zend Optimizer
  • /scripts/ipusage Display Ipusage Report
  • /scripts/killacct Terminate an Account
  • /scripts/killbadrpms Delete "Security Problem Infested RPMS"
  • /scripts/mailperm Fix Various Mail Permission Problems
  • /scripts/mailtroubleshoot Attempt to Troubleshoot a Mail Problem
  • /scripts/mysqlpasswd Change a Mysql Password
  • /scripts/quicksecure Kill Potential Security Problem Services
  • /scripts/rebuildippool Rebuild Ip Address Pool
  • /scripts/remdefssl Delete Nasty SSL entry in apache default httpd.conf
  • /scripts/restartsrv Restart a Service (valid services: httpd,proftpd,exim,sshd,cppop,bind,mysql)
  • /scripts/rpmup Syncup Security Updates from RedHat/Mandrake
  • /scripts/runlogsnow Force a webalizer/analog update.
  • /scripts/secureit Remove non-important suid binaries
  • /scripts/setupfp4 Install Frontpage 4+ on an account.
  • /scripts/simpleps Return a Simple process list. Useful for finding where cgi scripts are running from.
  • /scripts/suspendacct Suspend an account
  • /scripts/sysup Syncup Cpanel RPM Updates
  • /scripts/ulimitnamed RH 6 only. Install a version of bind to handle many many zones.
  • /scripts/unblockip Unblock an IP
  • /scripts/unsuspendacct UnSuspend an account
  • /scripts/upcp Update Cpanel
  • /scripts/updatenow Update /scripts
  • /scripts/wwwacct Create a New Account

Delete MRTG

rpm -e --nodeps `rpm -qa|grep mrtg`

Empty /tmp folder

rm -R -f /tmp/c*
rm -R -f /tmp/s*
rm -R -f /tmp/p*
rm -R -f /tmp/*_*
rm -R -f /tmp/*-*

netstat -n -p
Useful to see who is connected to your server. The -n switch skips DNS resolution and shows numeric IP addresses, and the -p switch shows you what program each connection belongs to, with a PID for it if there is one; useful if you need to kill something.

find / -user username
Replace username with the username of one of your accounts to find all the files that belong to them. It's also useful to pipe the output to |more so you can scroll one screen at a time. Ever have a client who seems to use a lot more disk space than is actually in their home directory? This is how you find those files and fix them. A common problem is cpmove files that don't get properly deleted and get charged to a user's account.
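A minimal, self-contained illustration of find -user, using a temporary directory so it runs anywhere:

```shell
# Create two files in a scratch directory; they are owned by the current user.
dir="$(mktemp -d)"
touch "$dir/a.txt" "$dir/b.txt"

# find -user lists only files owned by the given user.
find "$dir" -type f -user "$(whoami)" | wc -l
```

The count printed is 2, since both files belong to the current user.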

/scripts/pkgacct2 username
Replace username with a user on your system. This should be done from the home directory. Useful for manually backing up an account if WHM's copy-account feature doesn't work. Then just move (mv) the file to a home directory accessible via the web,
chown user:user filename
and chmod it to 750 or 755, and you can wget it from a different server if need be.
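If the octal modes are unfamiliar: 750 means rwx for the owner, r-x for the group, and nothing for everyone else. You can verify this with ls -l on a scratch file:

```shell
# chmod 750 = owner rwx, group r-x, others none.
f="$(mktemp)"
chmod 750 "$f"
ls -l "$f" | cut -c1-10
```

The permission string printed is -rwxr-x---.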

/scripts/restorepkg username
Once you've got the file and need to unpack it, use this command. The file should be in the /home directory for this to work, though. Remember folks: pass username, not cpmove-username.tar.gz.

crontab -e
Edits the crontab file so you can see what is set to run in there.

--help (add to the end of a command after a single space)
Such as tar --help; similar to man, it digs up info on any given command.

tail -10 filename
Gives you the last 10 lines of a file. You can change the number to whatever you want.
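To see tail in action without touching real logs, generate a small numbered file first (the file name is arbitrary):

```shell
# Create a 20-line file, then show only its last 3 lines.
tmpfile="$(mktemp)"
seq 1 20 > "$tmpfile"
tail -3 "$tmpfile"
```

This prints 18, 19 and 20, one per line.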

cp -R FileOrDirectory path/to/destination
The -R allows you to copy an entire directory to somewhere else.

kill -9
Not just for eggdrops… it's called a “hard kill” and is handy for killing off any stubborn process that refuses to die.
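A safe way to watch a hard kill work is to start a throwaway background process and kill it by PID:

```shell
# Start a long sleep in the background and grab its PID.
sleep 300 &
pid=$!

# Hard-kill it, then confirm the process is gone.
kill -9 "$pid"
wait "$pid" 2>/dev/null || true
if ! kill -0 "$pid" 2>/dev/null; then
  echo "process $pid is gone"
fi
```

kill -0 sends no signal at all; it only checks whether the PID still exists, which makes it a handy follow-up test.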

whereis filename (use * as a wildcard for a broader search)
You can also use locate or find (although locate is faster).

killall -u username
Not just for killing processes by name: with -u, killall kills all processes being run by a user. Handy if you have an abuser eating up system resources.

Facebook style time ago function PHP

Here is a Facebook-style “time ago” function snippet. It is quite useful for social networking sites; you can add this function to your helper component.


function time_elapsed_string($datetime) {
    $today = time();
    $createdday = strtotime($datetime);
    $datediff = abs($today - $createdday);
    $years   = floor($datediff / (365*60*60*24));
    $months  = floor(($datediff - $years*365*60*60*24) / (30*60*60*24));
    $days    = floor(($datediff - $years*365*60*60*24 - $months*30*60*60*24) / (60*60*24));
    $hours   = floor($datediff / 3600);
    $minutes = floor($datediff / 60);
    $seconds = floor($datediff);

    // Pick the largest non-zero unit, singular or plural as appropriate.
    if ($years > 1) {
        $difftext = $years." years ago";
    } elseif ($years == 1) {
        $difftext = $years." year ago";
    } elseif ($months > 1) {
        $difftext = $months." months ago";
    } elseif ($months == 1) {
        $difftext = $months." month ago";
    } elseif ($days > 1) {
        $difftext = $days." days ago";
    } elseif ($days == 1) {
        $difftext = $days." day ago";
    } elseif ($hours > 1) {
        $difftext = $hours." hours ago";
    } elseif ($hours == 1) {
        $difftext = $hours." hour ago";
    } elseif ($minutes > 1) {
        $difftext = $minutes." minutes ago";
    } elseif ($minutes == 1) {
        $difftext = $minutes." minute ago";
    } elseif ($seconds > 1) {
        $difftext = $seconds." seconds ago";
    } else {
        $difftext = $seconds." second ago";
    }
    return $difftext;
}


You can call this function with a date-time parameter:

time_elapsed_string('2013-10-20 17:15:20')


Browse Files and Folders with Node.js

Watching a file or directory for changes is an important part of automation.  We all enjoy using our favorite CSS preprocessor’s “watch” feature — we can still refresh the page and see our changes as though we were simply writing in pure CSS.  Node.js makes both file and directory watching easy — but it’s a bit more difficult than you may think.

Simply put: Node.js' watching features aren't consistent or performant yet, which the documentation admits. The good news: a utility called chokidar stabilizes file watching and provides added insight into what has happened. chokidar provides a wealth of listeners; instead of providing boring reduced examples, here's what chokidar provides you:

var chokidar = require('chokidar');

var watcher = chokidar.watch('file, dir, or glob', {
  ignored: /[\/\\]\./, persistent: true
});

var log = console.log.bind(console);

watcher
  .on('add', function(path) { log('File', path, 'has been added'); })
  .on('addDir', function(path) { log('Directory', path, 'has been added'); })
  .on('change', function(path) { log('File', path, 'has been changed'); })
  .on('unlink', function(path) { log('File', path, 'has been removed'); })
  .on('unlinkDir', function(path) { log('Directory', path, 'has been removed'); })
  .on('error', function(error) { log('Error happened', error); })
  .on('ready', function() { log('Initial scan complete. Ready for changes.'); })
  .on('raw', function(event, path, details) { log('Raw event info:', event, path, details); });

// 'add', 'addDir' and 'change' events also receive stat() results as second
// argument when available:
watcher.on('change', function(path, stats) {
  if (stats) console.log('File', path, 'changed size to', stats.size);
});

// Watch new files.
watcher.add(['new-file-2', 'new-file-3', '**/other-file*']);

// Un-watch some files.
watcher.unwatch('new-file*');

// Stop watching. Only needed if watching is `persistent: true`.
watcher.close();

// One-liner
require('chokidar').watch('.', {ignored: /[\/\\]\./}).on('all', function(event, path) {
  console.log(event, path);
});
What a wealth of handlers, especially when you've experienced the perils of `fs` watch functionality. File watching is essential to seamless development, and chokidar makes life easy!

Building Your First Desktop App With HTML, Node-WebKit and JS

These days you can do pretty much anything with JavaScript and HTML. Thanks to Node-WebKit, we can even create desktop applications that feel native, and have full access to every part of the operating system. In this short tutorial, we will show you how to create a simple desktop application using Node-WebKit, which combines jQuery and a few Node.js modules.

Node-WebKit is a combination of Node.js and an embedded WebKit browser. The JavaScript code that you write is executed in a special environment and has access to both standard browser APIs and Node.js. Sounds interesting? Keep reading!

Installing Node-WebKit

For developing applications, you will need to download the node-webkit executable and call it from your terminal when you want to run your code. (Later you can package everything into a single program so your users can just click an icon to start it.)

Head over to the project page and download the executable that is built for your operating system. Extract the archive somewhere on your computer. To start it, you need to do this in your terminal:

# If you are on linux/osx

/path/to/node-webkit/nw /your/project/folder

# If you are on windows

C:\path\to\node-webkit\nw.exe C:\your\project\folder

# (the paths are only for illustrative purposes, any folder will do)

This will open a new node-webkit window and print a bunch of debug messages in your terminal.

You can optionally add the extracted node-webkit folder to your PATH, so that it is available as the nw command from your terminal.
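Adding the folder to your PATH is just a shell variable edit; here is a sketch using a made-up install location (adjust /opt/node-webkit to wherever you extracted the archive):

```shell
# Hypothetical extraction directory; substitute your own.
export PATH="$PATH:/opt/node-webkit"

# Confirm the directory is now on the search path (a non-zero count confirms it).
echo "$PATH" | tr ':' '\n' | grep -c '^/opt/node-webkit$'
```

Putting the export line in your ~/.bashrc makes the change permanent for new shells.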

Your First Application

There is a Download button near the top of this article. Click it and get a zip with a sample app that we prepared for you. It fetches the most recent articles on Tutorialzine from our RSS feed and turns them into a cool looking 3D carousel using jQuery Flipster.


Directory Structure

Once you extract it, you will see the files above. From here this looks like a standard static website. However, it won't work if you simply double-click index.html, because it requires Node.js modules, which a regular web browser cannot load. To run it, cd into this folder and try running the app with this command:

/path/to/node-webkit/nw .

This will show our glorious desktop app.


Our node-webkit app

How it was made

It all starts with the package.json file, which node-webkit looks up when starting. It describes what node-webkit should load and various parameters of the window.


{
  "name": "nw-app",
  "version": "1.0.0",
  "description": "",
  "main": "index.html",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "window": {
    "toolbar": false,
    "width": 800,
    "height": 500
  },
  "license": "ISC",
  "dependencies": {
    "pretty-bytes": "^1.0.2"
  }
}
The window property in this file tells node-webkit to open a new window 800 by 500px and hide the toolbar. The file pointed to by the main property will be loaded. In our case this is index.html:


<!DOCTYPE html>
<html>
<head>

    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">

    <title>Tutorialzine Node-Webkit Experiment</title>

    <link rel="stylesheet" href="./css/jquery.flipster.min.css">
    <link rel="stylesheet" href="./css/styles.css">

</head>
<body>

    <div class="flipster">
        <ul>
            <!-- Tutorialzine's latest articles will show here -->
        </ul>
    </div>

    <p class="stats"></p>

    <script src="./js/jquery.min.js"></script>
    <script src="./js/jquery.flipster.min.js"></script>
    <script src="./js/script.js"></script>

</body>
</html>
And finally, here is our JavaScript file. This is where it gets interesting!


// Mixing jQuery and Node.js code in the same file? Yes please!

$(function(){

    // Display some statistics about this computer, using node's os module.

    var os = require('os');
    var prettyBytes = require('pretty-bytes');

    $('.stats').append('Number of cpu cores: <span>' + os.cpus().length + '</span>');
    $('.stats').append('Free memory: <span>' + prettyBytes(os.freemem())+ '</span>');

    // Node webkit's native UI library. We will need it for later
    var gui = require('nw.gui');

    // Fetch the recent posts on Tutorialzine

    var ul = $('.flipster ul');

    // The same-origin security policy doesn't apply to node-webkit, so we can
    // send ajax request to other sites. Let's fetch Tutorialzine's rss feed:

    $.get('', function(response){

        var rss = $(response);

        // Find all articles in the RSS feed:

        rss.find('item').each(function(){

            var item = $(this);
            var content = item.find('encoded').html().split('</a></div>')[0]+'</a></div>';
            var urlRegex = /(http|ftp|https):\/\/[\w\-_]+(\.[\w\-_]+)+([\w\-\.,@?^=%&amp;:/~\+#]*[\w\-\@?^=%&amp;/~\+#])?/g;

            // Fetch the first image of the article
            var imageSource = content.match(urlRegex)[1];

            // Create a li item for every article, and append it to the unordered list

            var li = $('<li><img /><a target="_blank"></a></li>');

            li.find('a')
                .attr('href', item.find('link').text())
                .text(item.find('title').text());

            li.find('img').attr('src', imageSource);

            ul.append(li);

        });

        // Initialize the flipster plugin

        $('.flipster').flipster({
            style: 'carousel'
        });

        // When an article is clicked, open the page in the system default browser.
        // Otherwise it would open it in the node-webkit window which is not what we want.

        $('.flipster').on('click', 'a', function (e) {

            // Open URL with default browser.
            gui.Shell.openExternal(this.href);

            e.preventDefault();

        });

    });

});

Notice that we are accessing Tutorialzine’s RSS feed directly with jQuery, even though it is on a different domain. This is not possible in a browser, but Node-WebKit removes this limitation to make development of desktop applications easier.

Here are the node modules we’ve used:

  • Shell – A node webkit module that provides a collection of APIs that do desktop related jobs.
  • OS – The built-in Node.js OS module, which has a method that returns the amount of free system memory in bytes.
  • Pretty Bytes – Convert bytes to a human readable string: 1337 → 1.34 kB.

Our project also includes jQuery and the jQuery-flipster plugin, and that’s pretty much it!

Packaging and Distribution

You most certainly don't want your users to go through the same steps in order to run your application. You want to package it into a standalone program that opens by simply double-clicking it.

Packaging node-webkit apps for multiple operating systems takes a lot of work to do manually, but there are libraries that do this for you. We tried one such npm module, and it worked pretty well.

The only disadvantage is that the executable files are large (they can easily hit 40-50 MB), because they pack a stripped-down WebKit browser and Node.js together with your code and assets. This makes it rather impractical for small desktop apps (such as ours), but for larger apps it is worth a look.


Node-webkit is a powerful tool that opens a lot of doors for web developers. With it, you can easily create companion apps for your web services and build desktop clients which have full access to the user's computer.

You can read more about node-webkit on their wiki.

Node.js: Five Things Every PHP Developer Should Know

I recently started working on a few Node.js applications. Coming most recently from PHP (and Drupal in particular), I found the transition to Node.js to be surprisingly easy. Pleasurable, in fact. But I had to learn to think differently about a few things.

Below I list the five things I think every PHP developer should know about Node.js.

1. Node.js Is Built On Chrome’s JavaScript Engine

Google’s browser, Chrome, has a notoriously fast JavaScript engine called V8. And this JavaScript engine can be cleanly separated from the web browser. Node.js is built on V8. This is one of the main reasons why Node.js is so fast.

This has several positive implications for you, the developer:

  • You don’t need to learn a new “dialect” of JavaScript. I find myself referencing Chrome’s and Mozilla’s JS documentation all the time, because Node works the same way.
  • With V8’s JIT (Just In Time) compiling, apps run at near-native speeds (benchmarks indicate it’s much faster than PHP and Ruby, in terms of running analogous computational tasks).
  • As V8 improves, Node will too.

2. Node.js Isn’t (Just) A Web Server or Platform

Unlike PHP, Node.js is not “web centric” (yes, you can run CLI apps in PHP, but that wasn’t the original intent). Node.js is a general-purpose JavaScript runtime with a host of powerful libraries — one of which happens to provide an HTTP/HTTPS server implementation.

But you can do much more with Node. It is easy to build command line clients and other TCP/IP servers.

On the one hand, this is great news. Node.js is so flexible.

On the other hand, since Node.js isn't HTTP-centric, you may find yourself having to implement code to do things once provided for you by the framework. In other words, in Node, there is no $_GET.

3. Node.js Is Object-Oriented (In That Weird JavaScript Way)

I love jQuery. But it’s made me lazy. It’s made it very easy to write quick and dirty scripts without thinking about architecture. When using JavaScript for a few pieces of browser bling, perhaps this isn’t a bad thing.

But Node's clearly not about browser bling. It's about application building. Which means architecture. When you write code in Node.js, you're going to want to get neck-deep in JavaScript's prototypal object model.

Having a strong 10-years-in-Java background, I thought that JavaScript's weird prototype system would drive me crazy. And sometimes it does. But surprisingly, I'm falling in love with it. Node.js (and npm, the amazing Node Package Manager) make such good use of the prototypal JavaScript system that merely writing code “like they do” helped me clear many of the hurdles that my Class/Interface mind thought would be hard to grok.

4. Evented I/O?

Now we’re to the most controversial aspect of Node.js. Node itself runs in one thread. ONE! (Compare this to your typical Apache/PHP system where a dozen or more PHP instances are running at once.) Yet somehow it is fast and efficient.

What’s the secret? Sharing execution time, and offloading intensive IO processes to other threads.

I could go off on a long jargon-filled tangent about the benefits and drawbacks of “evented I/O”, but instead I’ll stick to the practical: When writing in Node.js, you need to think a little harder about whether your task is slow (and I/O bound) or fast. Use asynchronous functions with callbacks or event handlers for the slow work.

The important thing is to make sure that your application code doesn’t allow one request to monopolize the main Node process for too long without giving opportunities for other requests to get some work done.

5. Package Management is a Must!

Be honest. Do you love PEAR? Do you turn almost all of your code into PEAR or PECL packages? Not that many PHP developers do (and a surprising number of them don’t even know what PEAR packages are!).

You probably don’t want to carry that mentality over to Node.js.

  • Node.js is designed to be a minimalistic framework. 90% of the stuff you find in PHP’s core will not be present in Node.js’s core. Need an example or two? Database drivers? Not in Node’s core. Mail libraries? Not in Node’s core. HTML support? Not in Node’s core.
  • But a modular architecture is in Node’s core. And you will use it because it is awesome.
  • The npm tool (Node Package Manager) is the second thing you should download — right after Node. With it, a world of Node.js libraries will be available to you. Drivers, parsers, formatters, servers… there are thousands of packages.
  • Building and publishing your own packages is dead simple. I released my first one only a few days after starting with Node. It’s just that easy.

If you’re a Drupal developer, you can think about Node’s packaging system as something similar to Drupal modules — but with the developer (and not site-builder) in mind.

How To Install vsftpd on CentOS 6

The first two letters of vsftpd stand for “very secure,” and the program was built to have the strongest possible protection against FTP vulnerabilities.

Step One—Install vsftpd

You can quickly install vsftpd on your virtual private server from the command line:

sudo yum install vsftpd

We also need to install the FTP client, so that we can connect to an FTP server:

sudo yum install ftp

Once the files finish downloading, vsftpd will be on your VPS. Generally speaking, the virtual private server is already configured with a reasonable amount of security. However, it does provide access to anonymous users.

Step Two—Configure VSFTP

Once VSFTP is installed, you can adjust the configuration.

Open up the configuration file:

sudo vi /etc/vsftpd/vsftpd.conf

One primary change you need to make is to set anonymous_enable to NO:

anonymous_enable=NO

Prior to this change, vsftpd allowed anonymous, unidentified users to access the VPS's files. This is useful if you are seeking to distribute information widely, but may be considered a serious security issue in most other cases. After that, uncomment the local_enable option, changing it to YES:

local_enable=YES

Finish up by uncommenting chroot_local_user. When this line is set to YES, all local users will be jailed within their chroot and will be denied access to any other part of the server.

chroot_local_user=YES

Finish up by restarting vsftpd:

sudo service vsftpd restart

In order to ensure that vsftpd runs at boot, run chkconfig:

chkconfig vsftpd on

Step Three—Access the FTP server

Once you have installed the FTP server and configured it to your liking, you can now access it.

You can reach an FTP server in the browser by typing the domain name into the address bar and logging in with the appropriate ID. Keep in mind, you will only be able to access the user’s home directory.

Alternatively, you can reach the FTP server through the command line (substituting your server's domain name or IP address) by typing:

ftp example.com

Then you can type “exit” to leave the FTP shell.

Source : Digitalocean

How To Setup Virtual Host (Server Block) For Nginx On Ubuntu 14.04

Here's a brief tutorial that shows you how to create a virtual host, or server block, on the Nginx web server. Virtual Host is the term Apache2 uses for hosting multiple websites on a single web server.

Nginx, on the other hand, calls the same concept a Server Block. Either way, instead of running a single website on a single web server, virtual hosting allows one web server to host multiple websites with different domain names in separate containers.

That's what this short guide is going to show you. Websites hosted in virtual environments need separate root directories to hold each site's content. Each website also has its own configuration file, which controls how the site functions, and that's the beauty of implementing virtual hosting with web servers.

To get started with implementing virtual server blocks on Nginx, continue below.


Install Nginx on Ubuntu 14.04

First, install the Nginx web server. To do that on Ubuntu 14.04, run the commands below:

sudo apt-get update && sudo apt-get install nginx

Creating Virtual Directory

The next step is to create separate virtual directories for each website. Since Nginx's default path is /var/www/, we're going to create our directories there.

Create a virtual directory for the website:

sudo mkdir -p /var/www/html/

Content for the domain will live in the /var/www/html/ directory. You can create as many of these as you like; just keep them separate.

The next thing is granting the appropriate ownership to the Nginx web server. To change the ownership of the directory to Nginx, run the commands below.

sudo chown -R www-data:www-data /var/www/html/

Next, change the permissions on the directory so Nginx can function correctly.

sudo chmod -R 755 /var/www/html

Creating a test page
Now you can create a test index page and place it in the vhost folder to verify that the virtual host is working, since there's nothing there yet. Copy and paste the code below into a new file called index.html

This is my test page
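One way to create that file is with a heredoc; the HTML wrapper here is just a minimal sketch (write it locally first, then move it into your site's document root with sudo if needed):

```shell
# Write a minimal test page to the current directory.
cat > index.html <<'EOF'
<html>
<body>
<h1>This is my test page</h1>
</body>
</html>
EOF

# Confirm the file contains the test text.
grep -c "This is my test page" index.html
```

The grep count printed is 1, confirming the page was written.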

Configuring the virtual host
The next step is defining the virtual host parameters in its configuration file. Virtual host configuration files are used to control how the virtual web server functions and operates.

In this file is where you define the document root directory, control access rights, define the server name, admin email address and more. The configuration file is very important.

When you install Nginx on Ubuntu, a default configuration file with the basic settings is created. This file is there to verify that Nginx is working after you browse to the host.

So, we’re going to make a copy of the default configuration file to create the domain configuration file. To do that, run the commands below.

sudo cp /etc/nginx/sites-available/default /etc/nginx/sites-available/

Next, open the new configuration file and define the configuration for the virtual site.

sudo vi /etc/nginx/sites-available/

Configure the file

server {
    listen 80 default_server;
    #listen [::]:80 default_server ipv6only=on;
    root /var/www/html/;
    index index.html index.htm;
}

This is the basic configuration just to confirm that the virtual host is up. For more detailed configuration of Nginx, do a search on this site for Nginx configuration.

Save the file.

Finally, run the commands below to enable the site by creating a symbolic link to the sites-enabled directory.

sudo ln -s /etc/nginx/sites-available/ /etc/nginx/sites-enabled/
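The sites-enabled entry is just a symbolic link back to the file in sites-available, which you can see with a scratch example (the directory layout below mimics Nginx's, using made-up names in a temp directory):

```shell
# Mimic the sites-available / sites-enabled layout in a temp directory.
dir="$(mktemp -d)"
mkdir "$dir/sites-available" "$dir/sites-enabled"
echo "server { listen 80; }" > "$dir/sites-available/mysite"

# "Enable" the site by symlinking it, exactly as with Nginx.
ln -s "$dir/sites-available/mysite" "$dir/sites-enabled/mysite"
cat "$dir/sites-enabled/mysite"
```

Reading through the link prints the same server block, and deleting the link disables the site without touching the original file.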

Run the commands below to remove the default enabled site, to prevent a duplicate entry for the default IP/port:

sudo rm /etc/nginx/sites-enabled/default

Restart Nginx and test the site

sudo service nginx restart

If you're running this test locally, create a hosts entry on your local computer for the domain, pointing at the IP address of the server.

Do this for as many virtual websites as you wish; just make sure to define unique server names and IP addresses.

Features and benefits of Zend Framework

Experienced developers claim that building web applications with Zend Framework is a better approach than writing all of your own code from scratch. Why? Let's take a look.

Compliance with standards and use of best programming practices:

One of the features of PHP is that it does not lock developers into particular coding standards. Every experienced PHP developer eventually arrives at their own style of coding and structuring programs, so understanding someone else's program can often cause trouble, which matters for the coordinated work of a project team. PHP's relative “softness” toward free-form coding sometimes leads to “poor” and potentially vulnerable code.

Zend Framework avoids this danger by offering developers a set of libraries written in compliance with today’s best PHP programming practices. The framework provides a standard layout for project files and turnkey solutions to the most common problems in web programming, such as cleaning and validating input data. Building a project on the framework therefore leads to higher-quality code and more secure applications. Zend Framework’s good documentation also deserves mention: it makes it painless to add new developers to the project team at any stage of implementation.


Object-oriented architecture

Zend Framework is implemented with full support for the object model introduced in PHP 5.x. This OOP-based architecture encourages developers to write programs around code reuse, reducing the time spent writing duplicate code. That is important for web applications with multiple interfaces for data exchange. For example, if you need to add an XML-based search interface to an existing application, there is no need to repeat the controller logic you already have: adding new functionality in Zend Framework is simple and transparent.


Internationalization

Zend Framework is designed for building applications for the Internet, which means an application can be used by anyone: people living in different countries, speaking different languages, and using different formats for dates, times, and currency. There was a time when writing a site that was “friendly” to visitors from different countries required significant effort and caused real headaches. With Zend Framework you need not worry about this: the Zend_Locale component manages language settings, Zend_Translate handles multilingual content and works with Latin, Chinese, and other scripts, and the Zend_Date and Zend_Currency components take care of localized formatting of dates, times, and currency.

Open source

Zend Technologies is actively involved in the financial support of the project. Despite this, Zend Framework is open source and is developed mainly by a large group of volunteers who fix bugs and add new features. Zend Technologies officially determines the direction of development through a group of “leading developers” who shape the final product’s functionality. The framework is thus available for use without paying license fees or purchasing additional hardware or software.

Extensive community support

A project using Zend Framework can easily integrate Flickr photo galleries or Google Maps by using the Zend_Service_Flickr and Zend_Gdata components. Interaction with a Flash application that uses Adobe’s AMF (Action Message Format) is handled by the Zend_Amf component, and an RSS subscription feed is easily implemented with the Zend_Feed component.

These features illustrate one of Zend Framework’s most attractive qualities: it harnesses the creative efforts of hundreds of experienced developers worldwide. Zend Framework includes many independent components that developers use to quickly add new features to their PHP projects. In terms of time and effort, this approach to building a project is far preferable to writing your own code.

Google Sitelinks search box within the search results

Starting today, Google has announced that you’ll be seeing a new and improved sitelinks search box. This new search box is designed to make it easier for users to reach specific content on your site, directly through your own site-search pages.

Google points out that when users search for a company by name, they may really be looking for something specific on that company’s website. Previously, when Google’s algorithms detected this, they would display a larger set of sitelinks plus an additional search box below that search result, which let users do site: searches over the site straight from the results.

With the change that Google rolled out today, the sitelinks search box is now more prominent and placed above the sitelinks. The new search box also supports Autocomplete.

The new sitelinks search box can send users directly to your website’s own search pages, provided your pages are marked up correctly. Here’s how to mark up your site to enable this feature.

You need to have a working site-specific search engine for your site. If you already have one, tell Google about it by marking up your homepage as a schema.org WebSite entity with the potentialAction property. You can use JSON-LD, microdata, or RDFa to do this; see the full implementation details on Google’s developer site.

If the markup is implemented correctly on your site, users will be able to jump directly from the sitelinks search box to your site’s search results page. If Google doesn’t detect any markup, users will be shown a Google search results page for the corresponding site: query, just as before.
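
As a sketch of that markup in JSON-LD form, using a hypothetical example.com domain and a hypothetical /search?q= endpoint (substitute your own site URL and search URL pattern), the WebSite entity with a potentialAction placed on the homepage looks like this:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "url": "https://www.example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://www.example.com/search?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
```

The {search_term_string} placeholder in target is where Google substitutes the user’s query, and query-input declares that placeholder as required.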