Wednesday, 24 February 2016

Validate site url in Drupal 8

If we have a path, we can validate it using the code below. The path may be a direct node path or a URL alias for the node.

Example:

$input_path = "node/1";
// getUrlIfValid() returns a \Drupal\Core\Url object for a valid path,
// or FALSE if the path cannot be resolved.
$url = \Drupal::pathValidator()->getUrlIfValid($input_path);
if ($url) {
    print "Valid Path";
}
else {
    print "Invalid Path";
}
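
If you only need a yes/no answer rather than the Url object, the path.validator service also provides an isValid() method; a minimal sketch:

$input_path = "node/1";
if (\Drupal::pathValidator()->isValid($input_path)) {
    print "Valid Path";
}
else {
    print "Invalid Path";
}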


Hope this helps.

Tuesday, 9 February 2016

Facebook BigPipe Concept

Site speed is one of the most critical company goals for Facebook. In 2009, we successfully made the Facebook site twice as fast, as described in an earlier post. Several key innovations from our engineering team made this possible. In this blog post, I will describe one of the secret weapons we used, called BigPipe, that underlies this achievement.

BigPipe is a fundamental redesign of the dynamic web page serving system. The general idea is to decompose web pages into small chunks called pagelets, and pipeline them through several execution stages inside web servers and browsers. This is similar to the pipelining performed by most modern microprocessors: multiple instructions are pipelined through different execution units of the processor to achieve the best performance. Although BigPipe is a fundamental redesign of the existing web serving process, it does not require changing existing web browsers or servers; it is implemented entirely in PHP and JavaScript.

Motivation



To understand BigPipe, it's helpful to take a look at the problems with the existing dynamic web page serving system, which dates back to the early days of the World Wide Web and has not changed much since then. Modern websites have become dramatically more dynamic and interactive than they were 10 years ago, and the traditional page serving model has not kept up with the speed requirements of today's Internet. In the traditional model, the life cycle of a user request is the following (a minimal sketch of this flow appears after the list):

  1. Browser sends an HTTP request to web server.
  2. Web server parses the request, pulls data from storage tier then formulates an HTML document and sends it to the client in an HTTP response. 
  3. HTTP response is transferred over the Internet to browser. 
  4. Browser parses the response from web server, constructs a DOM tree representation of the HTML document, and downloads CSS and JavaScript resources referenced by the document. 
  5. After downloading CSS resources, browser parses them and applies them to the DOM tree.
  6. After downloading JavaScript resources, browser parses and executes them.
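
To make the sequential nature concrete, here is a minimal PHP sketch of the traditional model. The helper functions, sleep() delays and markup are made-up placeholders for this example, not Facebook's actual code; the point is that every data source is queried first, and no byte reaches the browser until the slowest query has finished.

<?php
// Hypothetical data-fetching helpers standing in for the storage tier.
function fetch_friend_list() { sleep(1); return 'Alice, Bob'; }
function fetch_news_feed()   { sleep(3); return 'Post 1, Post 2'; } // the slow query
function fetch_ads()         { sleep(1); return 'Ad A'; }

// Step 2 (server side): pull all data from the storage tier up front;
// the slowest fetch blocks the whole page.
$friends = fetch_friend_list();
$feed    = fetch_news_feed();
$ads     = fetch_ads();

// Still step 2: formulate the complete HTML document in one pass.
$html  = '<html><head><link rel="stylesheet" href="/site.css"></head><body>';
$html .= '<div id="friends">' . $friends . '</div>';
$html .= '<div id="feed">' . $feed . '</div>';
$html .= '<div id="ads">' . $ads . '</div>';
$html .= '<script src="/site.js"></script></body></html>';

// Step 3: only now does the HTTP response start travelling to the browser;
// steps 4-6 (DOM construction, CSS, JavaScript) all wait for this moment.
echo $html;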


The traditional model is very inefficient for modern websites, because a lot of the operations in the system are sequential and can't be overlapped with each other. Some optimization techniques, such as delaying JavaScript downloading and parallelizing resource downloading, have been widely adopted in the web community to overcome some of these limitations. However, very few of these optimizations touch the bottleneck caused by the web server and browser executing sequentially. When the web server is busy generating a page, the browser is idle, wasting its cycles doing nothing. When the web server finishes generating the page and sends it to the browser, the browser becomes the performance bottleneck and the web server cannot help any more. By overlapping the web server's generation time with the browser's rendering time, we can not only reduce the end-to-end latency but also make the initial response of the web page visible to the user much earlier, thereby significantly reducing user-perceived latency.

The overlapping of web server generation time and browser render time is especially useful for content-rich websites like Facebook. A typical Facebook page contains data from many different data sources: friend list, news feed, ads, and so on. In the traditional page rendering model, a user would have to wait until all of these queries have returned data before the final document is built and sent to the user's computer. One slow query holds up the train for everything else.

How BigPipe works



To exploit the parallelism between web server and browser, BigPipe first breaks web pages into multiple chunks called pagelets. Just as a pipelining microprocessor divides an instruction's life cycle into multiple stages (such as "instruction fetch", "instruction decode", "execution", "register write back" etc.), BigPipe breaks the page generation process into several stages (a rough PHP sketch of this flow follows the list):

  1. Request parsing: web server parses and sanity checks the HTTP request. 
  2. Data fetching: web server fetches data from storage tier.
  3. Markup generation: web server generates HTML markup for the response. 
  4. Network transport: the response is transferred from web server to browser.
  5. CSS downloading: browser downloads CSS required by the page.
  6. DOM tree construction and CSS styling: browser constructs DOM tree of the document, and then applies CSS rules on it. 
  7. JavaScript downloading: browser downloads JavaScript resources referenced by the page.
  8. JavaScript execution: browser executes JavaScript code of the page.
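
For illustration, here is a minimal PHP sketch of the server side of this flow. The pagelet payload format and the client-side renderPagelet() routine are assumptions for the example, not Facebook's actual implementation; the idea it shows is that the page skeleton is flushed first so the browser-side stages can start early, and each pagelet is flushed as soon as its own data is ready.

<?php
// Send the skeleton immediately: placeholder <div>s plus the CSS link, so the
// browser can begin downloading CSS and building the DOM (stages 5-6) while
// the server is still fetching data.
echo '<html><head><link rel="stylesheet" href="/site.css"></head><body>';
echo '<div id="friends"></div><div id="feed"></div><div id="ads"></div>';
flush();

// Each pagelet is generated and flushed as soon as its own data is ready.
// (In the real system these fetches run in parallel; they are sequential
// here only to keep the sketch short.)
$pagelets = [
  'friends' => function () { sleep(1); return 'Alice, Bob'; },
  'feed'    => function () { sleep(3); return 'Post 1, Post 2'; }, // slow query
  'ads'     => function () { sleep(1); return 'Ad A'; },
];

foreach ($pagelets as $id => $generate) {
  $payload = json_encode(['id' => $id, 'content' => $generate()]);
  // A small inline <script> hands the pagelet to an assumed client-side
  // routine (renderPagelet) that injects the HTML into the matching
  // placeholder and loads any extra CSS/JS the pagelet needs.
  echo "<script>renderPagelet($payload);</script>";
  flush();
}

echo '</body></html>';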

Source: https://www.facebook.com/notes/facebook-engineering/bigpipe-pipelining-web-pages-for-high-performance/389414033919

PHP Codesniffer - Ignore warning errors

Use the command below to ignore warnings while generating a report:

phpcs -n /path_to_directory/

The above command will report only errors and ignore warnings.