All posts written by hasantokatli

HTTP – browser process

Start Render Time

The start render time is the moment the page stops being blank and the user can actually see something in her browser: some text, a background-color …

DOMContentLoaded event

The DOMContentLoaded event is fired when the document is done parsing and synchronous scripts are loaded, parsed and executed. (This is the point at which jQuery's ready function fires.)

In other words, DOMContentLoaded fires when the document has been completely loaded and parsed, without waiting for stylesheets, images, and subframes to finish loading (the load event can be used to detect a fully-loaded page).

"Note: Stylesheet loads block script execution, so if you have a <script> after a <link rel="stylesheet" …>, the page will not finish parsing – and DOMContentLoaded will not fire – until the stylesheet is loaded."
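A small sketch of a "ready" helper around DOMContentLoaded (the function name onDomReady is mine, not a standard API); it also handles the case where the event has already fired by the time the handler is registered:

```javascript
// Run `fn` as soon as the DOM is parsed. Handles three cases:
// still loading, already parsed, or no DOM at all (e.g. Node).
function onDomReady(fn) {
  if (typeof document === 'undefined') {
    fn(); // no DOM: nothing to wait for
  } else if (document.readyState === 'loading') {
    document.addEventListener('DOMContentLoaded', fn);
  } else {
    fn(); // DOMContentLoaded has already fired
  }
}

onDomReady(function () {
  // safe to query and modify the DOM here; images may still be loading
});
```

Checking document.readyState first is what jQuery's ready does internally, since a listener added after the event has fired would otherwise never run.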

Document Complete / onload event

Document Complete is actually the point in time when all the content (images, iframes, etc.) referenced in the HTML is fully loaded.

It is supported by many elements. For example, external SCRIPT, IMG and IFRAME elements trigger it when downloading of their content finishes.

The window.onload and iframe.onload handlers trigger when the page is fully loaded with all dependent resources, including images and styles.
An example with IFRAME:

<iframe src="" width="300" height="150"></iframe>
<script>
  document.getElementsByTagName('iframe')[0].onload = function() {
    // fires once the iframe content has finished loading
    alert('iframe loaded');
  };
</script>

To execute the code when the current page finishes loading, use window.onload.

window.onload is rarely used, because no one wants to wait until all resources load, especially for large pictures. Normally, we need the DOM and scripts to build interfaces. That’s exactly what DOMContentLoaded is for.


Web Caching

Tool: https://redbot.org (analyses the HTTP headers of a web page and its resources)
Tutorial: https://www.mnot.net/cache_docs/ (by the developer of redbot)
Search Google Images for "web caching" and "http caching" for helpful diagrams.

Request Specific Headers

  • Accept*:
    The client tells the server (or a proxy) which types of content, settings and properties it supports and expects in the response.
  • Accept-Encoding:
    Which compression algorithms the agent supports (gzip, deflate -> traditional and general usage; sdch -> Google only).
  • Accept-Language:
    A list of languages (and dialects) that the client has configured.
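As a sketch of how a server might use Accept-Encoding when picking a Content-Encoding (simplified: quality values and "*" are ignored, and chooseEncoding is a hypothetical helper, not a real API):

```javascript
// Pick a Content-Encoding from the request's Accept-Encoding header.
// Simplified: ignores q-values, first supported algorithm in the list wins.
function chooseEncoding(acceptEncoding, supported) {
  var offered = (acceptEncoding || '').split(',').map(function (token) {
    return token.split(';')[0].trim().toLowerCase();
  });
  for (var i = 0; i < offered.length; i++) {
    if (supported.indexOf(offered[i]) !== -1) {
      return offered[i]; // compress the body with this algorithm
    }
  }
  return 'identity'; // no common algorithm: send the body uncompressed
}

console.log(chooseEncoding('gzip, deflate, sdch', ['gzip', 'deflate'])); // -> "gzip"
```

The chosen value is then echoed back to the client in the Content-Encoding response header described below.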

Response Specific Headers

  • Vary: Tells the client agent that responses to this request may contain different content depending on the request headers listed in Vary. E.g. if the response's Vary header includes
    accept-language, the server may produce different page content depending on the request's accept-language value.

    RFC7231: "For example, a response that contains
         Vary: accept-encoding, accept-language
       indicates that the origin server might have used the request's
       Accept-Encoding and Accept-Language fields (or lack thereof) as
       determining factors while choosing the content for this response."
  • Content-Encoding:
    Which compression algorithm the server used to compress the response body. The client agent (usually a browser) decompresses the body content using this algorithm.
  • Cache-Control:
    Was introduced in HTTP/1.1 and offers more options than Expires. They can be used to accomplish the same thing but the data value for Expires is an HTTP date whereas Cache-Control max-age lets you specify a relative amount of time so you could specify “X hours after the page was requested”.
    “When both Cache-Control and Expires are present, Cache-Control takes precedence.”
    Expires can be used as a fallback.

    • no-cache: This instruction specifies that any cached content must be re-validated on each request before being served to a client. This, in effect, marks the content as stale immediately, but allows it to use revalidation techniques to avoid re-downloading the entire item again.
    • no-store: This instruction indicates that the content cannot be cached in any way. This is appropriate to set if the response represents sensitive data.
    • public: This marks the content as public, which means that it can be cached by the browser and any intermediate caches. For requests that utilized HTTP authentication, responses are marked private by default. This header overrides that setting.
    • private: This marks the content as private. Private content may be stored by the user’s browser, but must not be cached by any intermediate parties. This is often used for user-specific data.
    • max-age: This setting configures the maximum age that the content may be cached before it must revalidate or re-download the content from the origin server. In essence, this replaces the Expires header for modern browsing and is the basis for determining a piece of content’s freshness. This option takes its value in seconds with a maximum valid freshness time of one year (31536000 seconds).
    • s-maxage: This is very similar to the max-age setting, in that it indicates the amount of time that the content can be cached. The difference is that this option is applied only to intermediary caches. Combining this with the above allows for more flexible policy construction.
    • must-revalidate: This indicates that the freshness information indicated by max-age, s-maxage or the Expires header must be obeyed strictly. Stale content cannot be served under any circumstance. This prevents cached content from being used in case of network interruptions and similar scenarios.
    • proxy-revalidate: This operates the same as the above setting, but only applies to intermediary proxies. In this case, the user’s browser can potentially be used to serve stale content in the event of a network interruption, but intermediate caches cannot be used for this purpose.
    • no-transform: This option tells caches that they are not allowed to modify the received content for performance reasons under any circumstances. This means, for instance, that the cache is not allowed to send compressed versions of content that it did not receive from the origin server in compressed form.
      The no-store option supersedes the no-cache if both are present. For responses to unauthenticated requests, public is implied. For responses to authenticated requests, private is implied. These can be overridden by including the opposite option in the Cache-Control header.
      Public vs. Private: when a visitor accesses content (HTML, images, JS, etc.) after an authentication process, the visitor’s browser can cache that content as private, but other cache layers (e.g. proxies) must not cache it. Otherwise different visitors might be served user-specific content.
  • Expires:
    Basically, it sets a time in the future when the content will expire.
  • ETag:
    The Etag header is used with cache validation. The origin can provide a unique Etag for an item when it initially serves the content. When a cache needs to validate the content it has on-hand upon expiration, it can send back the Etag it has for the content. The origin will either tell the cache that the content is the same, or send the updated content (with the new Etag).
  • Last-Modified:
    This header specifies the last time that the item was modified. This may be used as part of the validation strategy to ensure fresh content.
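Putting Cache-Control max-age and ETag together, a cache's decision can be sketched roughly like this (maxAgeSeconds and cacheDecision are illustrative helpers I made up, not browser APIs):

```javascript
// Extract max-age from a Cache-Control header; 0 if absent.
function maxAgeSeconds(cacheControl) {
  var m = /max-age=(\d+)/.exec(cacheControl || '');
  return m ? parseInt(m[1], 10) : 0;
}

// Decide what a cache does with a stored response of a given age.
function cacheDecision(cacheControl, ageSeconds, cachedEtag, currentEtag) {
  if (ageSeconds < maxAgeSeconds(cacheControl)) {
    return 'serve-from-cache';   // still fresh: no request to the origin
  }
  if (cachedEtag === currentEtag) {
    return '304-reuse-cache';    // origin confirms the content is unchanged
  }
  return '200-download-again';   // content changed: fetch the new body
}

console.log(cacheDecision('public, max-age=3600', 60, '"v1"', '"v1"'));
// -> "serve-from-cache"
```

In a real revalidation the cache sends its ETag in an If-None-Match request header and the origin answers 304 Not Modified or 200 with a fresh body; the comparison above stands in for that round trip.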

Async script loading


  • http://stackoverflow.com/questions/7718935/load-scripts-asynchronously
  • https://www.igvita.com/2014/05/20/script-injected-async-scripts-considered-harmful/

//this function will work cross-browser for loading scripts asynchronously
function loadScript(src, callback)
{
    var s, r, t;
    r = false;
    s = document.createElement('script');
    s.type = 'text/javascript';
    s.src = src;
    s.onload = s.onreadystatechange = function() {
        //console.log( this.readyState ); //uncomment this line to see which ready states are called.
        if ( !r && (!this.readyState || this.readyState == 'complete') )
        {
            r = true;
            callback();
        }
    };
    t = document.getElementsByTagName('script')[0];
    t.parentNode.insertBefore(s, t);
}

If you’ve already got jQuery on the page, just use

$.getScript(url, successCallback)

Windows cmd .bat file parameter-passing sample

CopyFile.bat –>

@echo off
rem %2 is the switch (-copy); %~dpnx1 and %~dpnx3 expand arguments 1 and 3
rem to full paths (drive + path + file name + extension).
set param=%2
set argv1=%~dpnx1
set argv2=%~dpnx3
if %param% EQU -copy (
    if "%argv1%" NEQ "" if "%argv2%" NEQ "" (
        for %%a in ("%argv1%") do (
            if not exist "%argv2%" mkdir "%argv2%"
            copy /y %%a "%argv2%">Nul
        )
        echo( Finished copying [%argv1%] to [%argv2%]
    ) else echo( You need 3 arguments!
) else echo( Wrong argument : %param%

And run it with command:

CopyFile.bat "nameoffile" -copy "C:\Example"

npm vs bower vs composer

npm is the nodejs package manager. It therefore targets nodejs environments, which usually means server-side nodejs projects or command-line projects (bower itself is an npm package). If you are going to do anything with nodejs, then you are going to use npm.

bower is a package manager that aims at (front-end) web projects. You need npm and nodejs to install bower and to execute it, though bower packages are not meant specifically for nodejs, but rather for the “browser” environment.

composer is a dependency manager that targets php projects. If you are doing something with symfony (or plain old php), this is likely the way to go.

Summing it up:

  • doing node? you do npm
  • doing php? try composer
  • front-end javascript? try bower

And yes, the “json” files describe basic package information and dependencies. And yes, they are needed.
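For example, a minimal package.json for npm might look like this (the name and version numbers are purely illustrative):

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "description": "Sample package manifest",
  "dependencies": {
    "express": "^4.0.0"
  },
  "devDependencies": {
    "bower": "^1.8.0"
  }
}
```

bower.json and composer.json follow the same idea: a name, a version, and a map of dependency names to version constraints.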

Now, what about the READMEs? 🙂