https://getcomposer.org/doc/articles/troubleshooting.md#proc-open-fork-failed-errors
The start render time is the moment the page stops being blank and the user can actually see something in the browser: some text, a background color …
The DOMContentLoaded event fires on the document object when the page is ready: the HTML has been completely parsed and synchronous scripts have been loaded and executed (this is when jQuery's ready function runs). It does not wait for stylesheets, images, and subframes to finish loading (the load event can be used to detect a fully-loaded page).
"Note: Stylesheet loads block script execution, so if you have a <script> after a <link rel="stylesheet" …>, the page will not finish parsing – and DOMContentLoaded will not fire – until the stylesheet is loaded."
Document Complete is the point in time when all the content referenced in the HTML (images, iframes, etc.) is fully loaded.
The load event is supported by many elements. For example, external SCRIPT, IMG, and IFRAME elements trigger it when downloading of their content finishes.
The handlers window.onload and iframe.onload trigger when the page is fully loaded, with all dependent resources including images and styles.
The example with IFRAME:
<iframe src="" width="300" height="150"></iframe>
<script>
  document.getElementsByTagName('iframe')[0].onload = function() {
    alert('loaded');
  };
</script>
To execute the code when the current page finishes loading, use window.onload.
window.onload is rarely used, because no one wants to wait until all resources load, especially for large pictures. Normally, we need the DOM and scripts to build interfaces. That's exactly what DOMContentLoaded is for.
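The usual listener pattern can be sketched like this (a minimal illustration; the helper names are made up here so the readyState logic can be checked outside a browser):

```javascript
// document.readyState is 'loading' while parsing, then 'interactive'
// (around when DOMContentLoaded fires), then 'complete' (when load fires).
// A "DOM ready" callback can run immediately once parsing is done:
function shouldRunImmediately(readyState) {
  return readyState !== 'loading';
}

// Run cb as soon as the DOM is parsed, whether or not
// DOMContentLoaded has already fired (hypothetical helper).
function onDomReady(cb) {
  if (shouldRunImmediately(document.readyState)) {
    cb();
  } else {
    document.addEventListener('DOMContentLoaded', cb);
  }
}

// window load, by contrast, also waits for images, styles, iframes:
// window.addEventListener('load', function () { /* everything loaded */ });
```

The readyState check matters because a script loaded late (e.g. injected after the page is interactive) would otherwise wait forever for a DOMContentLoaded that has already fired.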
https://www.html5rocks.com/en/tutorials/internals/howbrowserswork/
Tool: https://redbot.org (analyzes the HTTP headers of a web page and its resources)
Tutorial: https://www.mnot.net/cache_docs/ (by the developer of redbot)
Search Google Images for "web caching" and "http caching" for diagrams.
Request Specific Headers
Response Specific Headers
RFC7231: "For example, a response that contains Vary: accept-encoding, accept-language indicates that the origin server might have used the request's Accept-Encoding and Accept-Language fields (or lack thereof) as determining factors while choosing the content for this response."
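As an illustration of that rule, an exchange like the following (all header values here are only examples) tells caches to store a separate variant per encoding and language:

```
GET /page HTTP/1.1
Accept-Encoding: gzip
Accept-Language: en

HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Language: en
Vary: Accept-Encoding, Accept-Language
```

A later request with Accept-Language: de would not be answered from this cached entry, because the listed request fields differ.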
Expires: the legacy header specifying an absolute date/time after which the content is stale.

The Cache-Control directives:

no-cache: Any cached content must be re-validated on each request before being served to a client. This, in effect, marks the content as stale immediately, but allows revalidation techniques to avoid re-downloading the entire item again.

no-store: The content cannot be cached in any way. This is appropriate if the response represents sensitive data.

public: Marks the content as public, meaning it can be cached by the browser and any intermediate caches. For requests that used HTTP authentication, responses are marked private by default; this directive overrides that.

private: Marks the content as private. Private content may be stored by the user's browser, but must not be cached by any intermediate parties. This is often used for user-specific data.

max-age: The maximum age (in seconds) that the content may be cached before it must be revalidated or re-downloaded from the origin server. In essence, this replaces the Expires header for modern browsers and is the basis for determining a piece of content's freshness. The maximum valid freshness time is one year (31536000 seconds).

s-maxage: Very similar to max-age in that it indicates how long the content can be cached; the difference is that it applies only to intermediary caches. Combining the two allows more flexible policy construction.

must-revalidate: The freshness information given by max-age, s-maxage, or the Expires header must be obeyed strictly; stale content cannot be served under any circumstance. This prevents cached content from being used in case of network interruptions and similar scenarios.

proxy-revalidate: Operates like must-revalidate, but applies only to intermediary proxies. The user's browser can potentially serve stale content in the event of a network interruption, but intermediate caches cannot be used for this purpose.

no-transform: Caches are not allowed to modify the received content for performance reasons under any circumstances. This means, for instance, that a cache cannot send compressed versions of content it did not receive from the origin server compressed.

The no-store option supersedes no-cache if both are present. For responses to unauthenticated requests, public is implied; for responses to authenticated requests, private is implied. These defaults can be overridden by including the opposite option in the Cache-Control header.

The Etag header is used with cache validation. The origin can provide a unique Etag for an item when it initially serves the content. When a cache needs to validate the content it has on hand upon expiration, it sends back the Etag it has for that content. The origin will either tell the cache that the content is the same, or send the updated content (with a new Etag).

Refs:
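To make the freshness arithmetic concrete, here is a small sketch (not a real cache implementation; the function names are invented for this example) that parses a Cache-Control value and decides whether a stored response may still be served without revalidation:

```javascript
// Parse a Cache-Control header value into an object, e.g.
// "public, max-age=3600" -> { public: true, 'max-age': 3600 }
function parseCacheControl(value) {
  var directives = {};
  value.split(',').forEach(function (part) {
    var pieces = part.trim().split('=');
    var name = pieces[0].toLowerCase();
    directives[name] = pieces.length > 1 ? parseInt(pieces[1], 10) : true;
  });
  return directives;
}

// A stored response is fresh if its age (seconds since it was stored)
// is still below max-age; no-store/no-cache force revalidation or refetch.
function isFresh(directives, ageSeconds) {
  if (directives['no-store'] || directives['no-cache']) return false;
  var maxAge = directives['max-age'];
  return typeof maxAge === 'number' && ageSeconds < maxAge;
}
```

With Cache-Control: public, max-age=3600, a copy stored 1800 seconds ago would still be served from cache; after 3600 seconds the cache would have to revalidate, e.g. by sending its stored Etag back to the origin.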
//this function will work cross-browser for loading scripts asynchronously
function loadScript(src, callback) {
  var done = false;
  var s = document.createElement('script');
  s.type = 'text/javascript';
  s.src = src;
  s.onload = s.onreadystatechange = function () {
    //console.log(this.readyState); //uncomment to see which ready states are called
    if (!done && (!this.readyState || this.readyState == 'complete')) {
      done = true;
      callback();
    }
  };
  var t = document.getElementsByTagName('script')[0];
  t.parentNode.insertBefore(s, t);
}
If you’ve already got jQuery on the page, just use
$.getScript(url, successCallback)
Regex to match only /* */ comments:
/\*([^*]|[\r\n]|(\*+([^*/]|[\r\n])))*\*+/
Regex to also match // comments:
(/\*([^*]|[\r\n]|(\*+([^*/]|[\r\n])))*\*+/)|(//.*)
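As a sanity check, the combined pattern can be used with String.replace to strip both comment styles (a quick sketch; note that, like any pure-regex approach, it will also "strip" comment-looking text inside string literals):

```javascript
// The combined pattern from above, as a JavaScript regex literal
// (inner slashes escaped, 'g' flag added so every match is replaced).
var commentPattern = /(\/\*([^*]|[\r\n]|(\*+([^*\/]|[\r\n])))*\*+\/)|(\/\/.*)/g;

function stripComments(code) {
  return code.replace(commentPattern, '');
}
```

For example, stripComments('var a = 1; /* note */ var b = 2; // trailing') removes both comments and leaves only the two statements.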
CopyFile.bat ->
@echo off
set param=%2
set argv1=%~dpnx1
set argv2=%~dpnx3
if %param% EQU -copy (
if "%argv1%" NEQ "" if "%argv2%" NEQ "" (
for %%a in ("%argv1%") do (
if not exist "%argv2%" mkdir "%argv2%"
copy /y %%a "%argv2%">Nul
)
echo( Finish Copy [%argv1%] to [%argv2%]
) else echo( You need 3 arguments!
) else echo( Wrong argument : %param%
pause>Nul
exit
And run it with command:
CopyFile.bat "nameoffile" -copy "C:\Example"
npm
is the nodejs package manager. It therefore targets nodejs environments, which usually means server-side nodejs projects or command-line projects (bower itself is an npm package). If you are going to do anything with nodejs, then you are going to use npm.
bower
is a package manager that aims at (front-end) web projects. You need npm and nodejs to install bower and to execute it, though bower packages are not meant specifically for nodejs, but rather for the “browser” environment.
composer
is a dependency manager that targets php projects. If you are doing something with Symfony (or plain old php), this is likely the way to go.
Summing it up:
And yes, the “json” files describe basic package information and dependencies. And yes, they are needed.
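For example, a minimal package.json (the name, version, and dependencies below are placeholders) declares the package and what it depends on; bower.json and composer.json follow the same basic idea:

```json
{
  "name": "my-example-app",
  "version": "1.0.0",
  "description": "Placeholder project to illustrate the manifest format",
  "dependencies": {
    "express": "^4.18.0"
  },
  "devDependencies": {
    "mocha": "^10.0.0"
  }
}
```

Running the package manager's install command reads this file and fetches everything listed under the dependency sections.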
Now, what about the READMEs? 🙂
From : http://stackoverflow.com/questions/11451535/gitignore-not-working
git rm -r --cached .
git add .
git commit -m "fixed untracked files"
On the Linux server, in a new directory do:
git init --shared --bare
Then on your local machine:
git remote add origin server:path/to/repo
git push --all origin
fail2ban-client set YOURJAILNAMEHERE unbanip IPADDRESSHERE
jailname = service name (e.g. ssh, proftp)
fail2ban-client status (lists the jails)
iptables -L -n (lists banned IPs)