# Chrome Security FAQ

[TOC]

<a name="TOC-Why-are-security-bugs-hidden-in-the-Chromium-issue-tracker-"></a>
## Why are security bugs hidden in the Chromium issue tracker?

We must balance a commitment to openness with a commitment to avoiding
unnecessary risk for users of widely-used open source libraries.

<a name="TOC-Can-you-please-un-hide-old-security-bugs-"></a>
## Can you please un-hide old security bugs?

Our goal is to open security bugs to the public once the bug is fixed and the
fix has been shipped to a majority of users. However, many vulnerabilities
affect products besides Chromium, and we don’t want to put users of those
products unnecessarily at risk by opening the bug before fixes for the other
affected products have shipped.

Therefore, we make all security bugs public within approximately 14 weeks of
the fix landing in the Chromium repository. The exception is when the bug
reporter or some other responsible party explicitly requests anonymity, or
protection against disclosing other particularly sensitive data included in
the vulnerability report (e.g. username and password pairs).

<a name="TOC-Can-I-get-advance-notice-about-security-bugs-"></a>
## Can I get advance notice about security bugs?

Vendors of products based on Chromium, distributors of operating systems that
bundle Chromium, and individuals and organizations that significantly
contribute to fixing security bugs can be added to a list for earlier access
to these bugs. You can email us at [email protected] to request to join the
list if you meet the above criteria. In particular, vendors of anti-malware,
IDS/IPS, vulnerability risk assessment, and similar products or services do
not meet this bar.

Please note that the safest version of Chrome/Chromium is always the latest
stable version; there is no good reason to wait to upgrade, so enterprise
deployments should always track the latest stable release. When you do this,
there is no need to further assess the risk of Chromium vulnerabilities: we
strive to fix vulnerabilities quickly and release often.

<a name="TOC-Can-I-see-these-security-bugs-so-that-I-can-back-port-the-fixes-to-my-downstream-project-"></a>
## Can I see these security bugs so that I can back-port the fixes to my downstream project?

Many developers of other projects use V8, Chromium, and sub-components of
Chromium in their own projects. This is great! We are glad that Chromium and
V8 suit your needs.

We want to open up fixed security bugs (as described in the previous answer),
and will generally give downstream developers access sooner. **However, please
be aware that backporting security patches from recent versions to old
versions cannot always work.** (There are several reasons for this: the patch
may not apply to old versions; the fix may have been to add or remove a
feature or change an API; the issue may seem minor until it's too late; and so
on.) We believe the latest stable versions of Chromium and V8 are the most
stable and secure. We also believe that tracking the latest stable upstream is
usually less work for greater benefit in the long run than backporting. We
strongly recommend that you track the latest stable branches, and we support
only the latest stable branch.

<a name="TOC-Are-privacy-issues-considered-security-bugs-"></a>
## Are privacy issues considered security bugs?

Privacy bugs, such as leaking information from Incognito, fingerprinting, and
bugs related to deleting browsing data, are not considered under the security
VRP. The Chrome Privacy team tracks them as functional bugs.

<a name="TOC-Timing-Attacks"></a>
## Are timing attacks considered security vulnerabilities?

Some timing attacks are considered security vulnerabilities, and some are
considered privacy vulnerabilities. Timing attacks vary significantly in terms
of impact, reliability, and exploitability.

Some timing attacks weaken mitigations like ASLR (e.g.
[Issue 665930](https://crbug.com/665930)). Others attempt to circumvent the
same origin policy, for instance, by using SVG filters to read pixels
cross-origin (e.g. [Issue 686253](https://crbug.com/686253) and
[Issue 615851](https://crbug.com/615851)).

Many timing attacks rely upon the availability of high-resolution timing
information (see [Issue 508166](https://crbug.com/508166)); such timing data
often has legitimate uses in non-attack scenarios, making it unappealing to
remove.

Timing attacks against the browser's HTTP cache (like
[Issue 74987](https://crbug.com/74987)) can potentially leak information about
which sites the user has previously loaded. The browser could attempt to
protect against such attacks (e.g. by bypassing the cache) at the cost of
performance and thus user experience. To mitigate such timing attacks,
end-users can delete browsing history and/or browse sensitive sites using
Chrome's Incognito or Guest browsing modes.

Other timing attacks can be mitigated via clever design changes. For instance,
[Issue 544765](https://crbug.com/544765) describes an attack whereby an
attacker can probe for the presence of HSTS rules (set by prior site visits)
by timing the load of resources with URLs "fixed-up" by HSTS. HSTS rules are
shared between regular browsing and Incognito mode, making the attack more
interesting. The attack was mitigated by changing Content-Security-Policy such
that secure URLs will match rules demanding non-secure HTTP URLs, a fix that
has also proven useful in helping to unblock migrations to HTTPS. Similarly,
[Issue 707071](https://crbug.com/707071) describes a timing attack in which an
attacker could determine which Android applications are installed; the attack
was mitigated by introducing randomness into the execution time of the
affected API.

<a name="TOC-What-are-the-security-and-privacy-guarantees-of-Incognito-mode-"></a>
## What are the security and privacy guarantees of Incognito mode?

Bugs in Incognito mode are tracked as privacy bugs, not security bugs.

The [Help
Center](https://support.google.com/chrome/answer/95464?hl=en&p=cpn_incognito)
explains what privacy protections Incognito mode attempts to enforce. In
particular, please note that Incognito is not a “do not track” mode, and it
does not hide aspects of your identity from web sites. Chrome does offer a way
to send a Do Not Track request to servers; see
chrome://settings/?search=do+not+track.

When in Incognito mode, Chrome does not store any new history, cookies, or
other state in non-volatile storage. However, Incognito windows will be able
to access some previously-stored state, such as browsing history.

<a name="TOC-Are-denial-of-service-issues-considered-security-bugs-"></a>
## Are denial of service issues considered security bugs?

Denial of Service (DoS) issues are treated as **abuse** or **stability**
issues rather than security vulnerabilities.

* If you find a reproducible crash, we encourage you to [report
  it](https://bugs.chromium.org/p/chromium/issues/entry?template=Crash%20Report).
* If you find a site that is abusing the user experience (e.g. preventing you
  from leaving a site), we encourage you to [report it](https://crbug.com/new).

DoS issues are not considered under the security vulnerability rewards
program; the [severity
guidelines](https://www.chromium.org/developers/severity-guidelines) outline
in more detail the types of bugs that are considered security vulnerabilities.

<a name="TOC-Are-XSS-filter-bypasses-considered-security-bugs-"></a>
## Are XSS filter bypasses considered security bugs?

No. Chromium contains a reflected XSS filter (called XSSAuditor) that is a
best-effort second line of defense against reflected XSS flaws found in web
sites. We do not treat these bypasses as security bugs in Chromium because the
underlying issue is in the web site itself. We treat them as functional bugs,
and we do appreciate such reports.

XSSAuditor is not able to defend against persistent XSS or DOM-based XSS.
There are also a number of infrequently occurring reflected XSS corner cases
that it will never be able to cover.

<a name="TOC-Why-aren-t-physically-local-attacks-in-Chrome-s-threat-model-"></a>
## Why aren't physically-local attacks in Chrome's threat model?

People sometimes report that they can compromise Chrome by installing a
malicious DLL in a place where Chrome will load it, by hooking APIs (e.g.
[Issue 130284](https://crbug.com/130284)), or by otherwise altering the
configuration of the PC.

We consider these attacks outside Chrome's threat model, because there is no
way for Chrome (or any application) to defend against a malicious user who has
managed to log into your computer as you, or who can run software with the
privileges of your operating system user account. Such an attacker can modify
executables and DLLs, change environment variables like `PATH`, change
configuration files, read any data your user account owns, email it to
themselves, and so on. Such an attacker has total control over your computer,
and nothing Chrome can do would provide a serious guarantee of defense. This
problem is not special to Chrome; all applications must trust the
physically-local user.

There are a few things you can do to mitigate risks from people who have
physical control over **your** computer, in certain circumstances.

* To stop people from reading your data in cases of device theft or loss, use
  full disk encryption (FDE). FDE is a standard feature of most operating
  systems, including Windows Vista and later, Mac OS X Lion and later, and
  some distributions of Linux. (Some older versions of Mac OS X had partial
  disk encryption: they could encrypt the user’s home folder, which contains
  the bulk of a user’s sensitive data.) Some FDE systems allow you to use
  multiple sources of key material, such as the combination of both a
  password and a key file on a USB token. When available, you should use
  multiple sources of key material to achieve the strongest defense. Chrome
  OS encrypts users’ home directories.
* If you share your computer with other people, take advantage of your
  operating system’s ability to manage multiple login accounts, and use a
  distinct account for each person. For guests, Chrome OS has a built-in
  Guest account for this purpose.
* Take advantage of your operating system’s screen lock feature.
* You can reduce the amount of information (including credentials like
  cookies and passwords) that Chrome will store locally by using Chrome's
  Content Settings (chrome://settings/content) and turning off the form
  auto-fill and password storage features
  ([chrome://settings/search#password](chrome://settings/search#password)).

There is almost nothing you can do to mitigate risks when using a **public**
computer.

* Assume everything you do on a public computer will become, well, public.
  You have no control over the operating system or other software on the
  machine, and there is no reason to trust its integrity.
* If you must use such a computer, consider using an Incognito mode window,
  to avoid persisting credentials. This, however, provides no protection
  when the system is already compromised as above.

<a name="TOC-Why-aren-t-compromised-infected-machines-in-Chrome-s-threat-model-"></a>
## Why aren't compromised/infected machines in Chrome's threat model?

This is essentially the same situation as with physically-local attacks. The
attacker's code, when it runs as your user account on your machine, can do
anything you can do. (See also [Microsoft's Ten Immutable Laws Of
Security](https://technet.microsoft.com/en-us/library/hh278941.aspx).)

<a name="TOC-What-about-unmasking-of-passwords-with-the-developer-tools-"></a>
## What about unmasking of passwords with the developer tools?

One of the most frequent reports we receive is password disclosure using the
Inspect Element feature (see [Issue 126398](https://crbug.com/126398) for an
example). People reason that "If I can see the password, it must be a bug."
However, this is just one of the [physically-local attacks described in the
previous
section](#TOC-Why-aren-t-physically-local-attacks-in-Chrome-s-threat-model-),
and all of those points apply here as well.

The reason the password is masked is only to prevent disclosure via
"shoulder-surfing" (i.e. the passive viewing of your screen by nearby
persons), not because it is a secret unknown to the browser. The browser knows
the password at many layers, including JavaScript, developer tools, process
memory, and so on. When you are physically local to the computer, and only
when you are physically local to the computer, there are, and always will be,
tools for extracting the password from any of these places.

<a name="TOC-Does-entering-JavaScript:-URLs-in-the-URL-bar-or-running-script-in-the-developer-tools-mean-there-s-an-XSS-vulnerability-"></a>
## Does entering JavaScript: URLs in the URL bar or running script in the developer tools mean there's an XSS vulnerability?

No. Chrome does not attempt to prevent the user from knowingly running script
against loaded documents, either by entering script in the Developer Tools
console or by typing a JavaScript: URI into the URL bar. Chrome and other
browsers do undertake some efforts to prevent *pasting* of script URLs into
the URL bar (to limit
[social-engineering](https://blogs.msdn.microsoft.com/ieinternals/2011/05/19/socially-engineered-xss-attacks/)),
but users are otherwise free to invoke script against pages using either the
URL bar or the DevTools console.

Similarly, users may create bookmarks pointed at JavaScript URLs that will run
on the currently-loaded page when the user clicks the bookmark; these are
called [bookmarklets](https://en.wikipedia.org/wiki/Bookmarklet).

<a name="TOC-Is-Chrome-s-support-for-userinfo-in-HTTP-URLs-e.g.-http:-user:password-example.com-considered-a-vulnerability-"></a>
## Is Chrome's support for userinfo in HTTP URLs (e.g. http://user:[email protected]) considered a vulnerability?

[Not at this time](https://crbug.com/626951). Chrome supports HTTP and HTTPS
URIs with username and password information embedded within them for
compatibility with sites that require this feature. Notably, Chrome will
suppress display of the username and password information in the URL box after
navigation, to limit the effectiveness of spoofing attacks that may try to
mislead the user. For instance, navigating to
`http://[email protected]` will show an address of
`http://evil.example.com` after the page loads.

<a name="TOC-Why-isn-t-passive-browser-fingerprinting-including-passive-cookies-in-Chrome-s-threat-model-"></a>
## Why isn't passive browser fingerprinting (including passive cookies) in Chrome's threat model?

As discussed in [Issue 49075](https://crbug.com/49075), we currently do not
attempt to defeat "passive fingerprinting",
"[evercookies](http://en.wikipedia.org/wiki/Evercookie)", or [ETag
cookies](http://en.wikipedia.org/wiki/HTTP_ETag#Tracking_using_ETags), because
defeating such fingerprinting is likely not practical without fundamental
changes to how the Web works. One needs roughly 33 bits of non-correlated,
distinguishing information to have a good chance of telling apart most user
agents on the planet (see [Arvind Narayanan's site](http://33bits.org/about/)
and [Peter Eckersley's discussion of the information theory behind
Panopticlick](https://www.eff.org/deeplinks/2010/01/primer-information-theory-and-privacy)).

Although Chrome developers could try to reduce the fingerprintability of the
browser by taking away (e.g.) JavaScript APIs, doing so would not achieve the
security goal for a few reasons: (a) we could not likely get the
distinguishability below 33 bits; (b) reducing fingerprintability requires
breaking many (or even most) useful web features; and (c) so few people would
tolerate the breakage that it would likely be easier to distinguish people who
use the fingerprint-defense configuration. (See "[Anonymity Loves Company:
Usability and the Network
Effect](http://freehaven.net/anonbib/cache/usability:weis2006.pdf)" by
Dingledine and Mathewson for more information.)

There is a pretty good analysis of in-browser fingerprinting vectors on [this
wiki
page](https://dev.chromium.org/Home/chromium-security/client-identification-mechanisms).
Browser vectors aside, it's possible that the browser could be accurately
fingerprinted entirely passively, without access to JavaScript or other web
features or APIs, by its network traffic profile alone. (See e.g. *[Silence on
the Wire](http://lcamtuf.coredump.cx/silence.shtml#/)* by Michal Zalewski
generally.)

Since we don't believe it's feasible to provide some mode of Chrome that can
truly prevent passive fingerprinting, we will mark all related bugs and
feature requests as WontFix.

<a name="TOC-Where-are-the-security-indicators-located-in-the-browser-window-"></a>
## Where are the security indicators located in the browser window?

The topmost portion of the browser window, consisting of the **Omnibox** (or
**Location Bar**), navigation icons, menu icon, and other indicator icons, is
sometimes called the browser **chrome** (not to be confused with the Chrome
Browser itself). Actual security indicators can only appear in this section of
the window. There can be no trustworthy security indicators elsewhere.

Furthermore, Chrome can only guarantee that it is correctly representing URLs
and their origins at the end of all navigation. Quirks of URL parsing, HTTP
redirection, and so on are not security concerns unless Chrome is
misrepresenting a URL or origin after navigation has completed.

Browsers present a dilemma to the user since the output is a combination of
information coming from both trustworthy sources (the browser itself) and
untrustworthy sources (the web page), and the untrustworthy sources are
allowed virtually unlimited control over graphical presentation. The only
restriction on the page's presentation is that it is confined to the large
rectangular area directly underneath the chrome, called the **viewport**.
Things like hover text and URL preview(s), shown in the viewport, are entirely
under the control of the web page itself. They have no guaranteed meaning, and
function only as the page desires. This can be even more confusing when pages
load content that looks like chrome. For example, many pages load images of
locks, which look similar to the meaningful HTTPS lock in the Omnibox, but in
fact do not convey any meaningful information about the transport security of
that page.

When the browser needs to show trustworthy information, such as the bubble
resulting from a click on the lock icon, it does so by making the bubble
overlap chrome. In the case of the lock bubble, it is a small triangular bump
in the border of the bubble that overlays the chrome. This visual detail can't
be imitated by the page itself since the page is confined to the viewport.

<a name="TOC-Why-does-Chrome-show-a-green-lock-even-if-my-HTTPS-connection-is-being-proxied-"></a>
## Why does Chrome show a green lock, even if my HTTPS connection is being proxied?

Some types of software intercept HTTPS connections. Examples include
anti-virus software, corporate network monitoring tools, and school censorship
software. In order for the interception to work, you need to install a private
trust anchor (root certificate) onto your computer. This may have happened
when you installed your anti-virus software, or when your company's network
administrator set up your computer. If that has occurred, your HTTPS
connections can be viewed or modified by the software.

Since you have allowed the trust anchor to be installed onto your computer,
Chrome assumes that you have consented to HTTPS interception. Anyone who can
add a trust anchor to your computer can make other changes to your computer,
too, including changing Chrome. (See also [Why aren't physically-local attacks
in Chrome's threat
model?](#TOC-Why-aren-t-physically-local-attacks-in-Chrome-s-threat-model-).)

<a name="TOC-Why-can-t-I-select-Proceed-Anyway-on-some-HTTPS-error-screens-"></a>
## Why can’t I select Proceed Anyway on some HTTPS error screens?

A key guarantee of HTTPS is that Chrome can be relatively certain that it is
connecting to the true web server and not an impostor. Some sites request an
even higher degree of protection for their users (i.e. you): they assert to
Chrome (via Strict Transport Security —
[HSTS](http://tools.ietf.org/html/rfc6797) — or by other means) that any
server authentication error should be fatal, and that Chrome must close the
connection. If you encounter such a fatal error, it is likely that your
network is under attack, or that there is a network misconfiguration that is
indistinguishable from an attack.

The best thing you can do in this situation is to raise the issue to your
network provider (or corporate IT department).

Chrome shows non-recoverable HTTPS errors only in cases where the true server
has previously asked for this treatment, and when it can be relatively certain
that the current server is not the true server.

<a name="TOC-How-does-key-pinning-interact-with-local-proxies-and-filters-"></a>
## How does key pinning interact with local proxies and filters?

To enable certificate chain validation, Chrome has access to two stores of
trust anchors (certificates that are empowered as issuers). One is the system
or public trust anchor store, and the other is the local or private trust
anchor store. The public store is provided as part of the operating system,
and is intended to authenticate public internet servers. The private store
contains certificates installed by the user or the administrator of the
client machine. Private intranet servers should authenticate themselves with
certificates issued by a private trust anchor.

Chrome’s key pinning feature is a strong form of web site authentication that
requires not only that a web server’s certificate chain be valid and chain to
a known-good trust anchor, but also that at least one of the public keys in
the certificate chain be known to be valid for the particular site the user
is visiting. This is a good defense against the risk that any trust anchor
can authenticate any web site, even if not intended by the site owner: if an
otherwise-valid chain does not include a known pinned key (“pin”), Chrome
will reject it because it was not issued in accordance with the site
operator’s expectations.

Chrome does not perform pin validation when the certificate chain chains up
to a private trust anchor. A key result of this policy is that private trust
anchors can be used to proxy (or
[MITM](http://en.wikipedia.org/wiki/Man-in-the-middle_attack)) connections,
even to pinned sites. “Data loss prevention” appliances, firewalls, content
filters, and malware can use this feature to defeat the protections of key
pinning.

We deem this acceptable because the proxy or MITM can only be effective if
the client machine has already been configured to trust the proxy’s issuing
certificate — that is, the client is already under the control of the person
who controls the proxy (e.g. the enterprise’s IT administrator). If the
client does not trust the private trust anchor, the proxy’s attempt to
mediate the connection will fail as it should.

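The policy above can be summarized in a short sketch. This is an illustrative
model only (the function name and argument shapes are invented, not Chrome's
API): a chain passes if any of its public-key hashes matches a pin, and pin
validation is skipped entirely for chains ending at a private trust anchor:

```python
def passes_pin_validation(chain_spki_hashes, pinned_hashes,
                          chains_to_private_anchor):
    """Illustrative model of the pinning policy described above."""
    if chains_to_private_anchor:
        # Private trust anchors bypass pin checks, which is what lets
        # local proxies and filters intercept even pinned sites.
        return True
    if not pinned_hashes:
        return True  # the site publishes no pins
    # Accept the chain only if some key in it matches a known pin.
    return any(h in pinned_hashes for h in chain_spki_hashes)

# A chain matching no pin is rejected unless it ends at a private anchor.
print(passes_pin_validation(["spki-X"], {"spki-A", "spki-B"}, False))  # False
print(passes_pin_validation(["spki-X"], {"spki-A", "spki-B"}, True))   # True
```

The second call models the proxy case: the same non-matching chain is
accepted solely because it terminates at a locally installed anchor.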
<a name="TOC-Can-I-use-EMET-to-help-protect-Chrome-against-attack-on-Microsoft-Windows-"></a>
## Can I use EMET to help protect Chrome against attack on Microsoft Windows?

There are [known compatibility
problems](https://sites.google.com/a/chromium.org/dev/Home/chromium-security/chromium-and-emet)
between Microsoft's EMET anti-exploit toolkit and some versions of Chrome.
These can prevent Chrome from running in some configurations. Moreover, the
Chrome security team does not recommend the use of EMET with Chrome because
its most important security benefits are redundant with or superseded by
built-in attack mitigations within the browser. For users, the very marginal
security benefit is not usually a good trade-off for the compatibility issues
and performance degradation the toolkit can cause.

<a name="TOC-Why-are-some-web-platform-features-only-available-in-HTTPS-page-loads-"></a>
## Why are some web platform features only available in HTTPS page-loads?

The full answer is here: we [Prefer Secure Origins For Powerful New
Features](https://www.chromium.org/Home/chromium-security/prefer-secure-origins-for-powerful-new-features).
In short, many web platform features give web origins access to sensitive new
sources of information, or significant power over a user's experience with
their computer, phone, watch, etc. We would therefore like to have some basis
to believe the origin meets a minimum bar for security: that the sensitive
information is transported over the Internet in an authenticated and
confidential way, and that users can make meaningful choices to trust or not
trust a web origin.

Note that the reason we require secure origins for WebCrypto is slightly
different: an application that uses WebCrypto is almost certainly using it to
provide some kind of security guarantee (e.g. encrypted instant messages or
email). However, unless the JavaScript was itself transported to the client
securely, it cannot actually provide any guarantee. (After all, a MITM
attacker could have modified the code, if it was not transported securely.)

<a name="TOC-Which-origins-are-secure-"></a>
## Which origins are "secure"?

Secure origins are those that match at least one of the following (scheme,
host, port) patterns:

* (https, *, *)
* (wss, *, *)
* (*, localhost, *)
* (*, 127/8, *)
* (*, ::1/128, *)
* (file, *, —)
* (chrome-extension, *, —)

That is, secure origins are those that load resources either from the local
machine (necessarily trusted) or over the network from a
cryptographically-authenticated server. See [Prefer Secure Origins For
Powerful New
Features](https://sites.google.com/a/chromium.org/dev/Home/chromium-security/prefer-secure-origins-for-powerful-new-features)
for more details.

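The pattern list above can be approximated in code. Here is a minimal sketch
(ports are ignored, since the patterns mostly wildcard them); this is not
Chromium's actual implementation:

```python
from ipaddress import ip_address

# Schemes the table above treats as secure regardless of host.
SECURE_SCHEMES = {"https", "wss", "file", "chrome-extension"}

def is_secure_origin(scheme: str, host: str) -> bool:
    """Approximate the (scheme, host, port) patterns listed above."""
    if scheme in SECURE_SCHEMES:
        return True
    if host == "localhost":
        return True
    try:
        # Covers both the 127/8 (IPv4) and ::1/128 (IPv6) loopback ranges.
        return ip_address(host).is_loopback
    except ValueError:
        return False  # not an IP literal, and not an allowlisted host

print(is_secure_origin("http", "127.0.0.1"))    # True
print(is_secure_origin("http", "example.com"))  # False
```

The loopback and `localhost` cases are secure because such resources never
leave the local machine, matching the rationale in the paragraph above.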
<a name="TOC-What-s-the-story-with-certificate-revocation-"></a>
## What's the story with certificate revocation?

Chrome's primary mechanism for checking the revocation status of HTTPS
certificates is
[CRLSets](http://dev.chromium.org/Home/chromium-security/crlsets).

Chrome also supports Online Certificate Status Protocol (OCSP). However, the
effectiveness of OCSP is essentially zero unless the client fails hard
(refuses to connect) if it cannot get a live, valid OCSP response. No browser
has OCSP set to hard-fail by default, for good reasons explained by Adam
Langley (see
[https://www.imperialviolet.org/2014/04/29/revocationagain.html](https://www.imperialviolet.org/2014/04/29/revocationagain.html) and
[https://www.imperialviolet.org/2014/04/19/revchecking.html](https://www.imperialviolet.org/2014/04/19/revchecking.html)).

Stapled OCSP with the Must-Staple option (hard-fail if a valid OCSP response
is not stapled to the certificate) is a much better solution to the
revocation problem than non-stapled OCSP. CAs and browsers are working toward
that solution (see the
[Internet-Draft](http://tools.ietf.org/html/draft-hallambaker-tlssecuritypolicy-03)).

Additionally, non-stapled OCSP poses a privacy problem: in order to check the
status of a certificate, the client must query an OCSP responder for the
status of the certificate, thus exposing a user's HTTPS browsing history to
the responder (a third party).

That said, you can use enterprise policies to [enable soft-fail
OCSP](https://www.chromium.org/administrators/policy-list-3#EnableOnlineRevocationChecks)
and hard-fail OCSP for [local trust
anchors](https://www.chromium.org/administrators/policy-list-3#RequireOnlineRevocationChecksForLocalAnchors).

Chrome performs online checking for [Extended
Validation](https://cabforum.org/about-ev-ssl/) certificates if it does not
already have a non-expired CRLSet entry covering the domain. If Chrome does
not get a response, it simply downgrades the security indicator to Domain
Validated.

See also [Issue 361820](https://crbug.com/361820) for more discussion of the
user-facing UX.

495<a name="TOC-Why-does-the-Password-Manager-ignore-autocomplete-off-for-password-fields-"></a>
496## Why does the Password Manager ignore `autocomplete='off'` for password fields?
497
498Ignoring `autocomplete='off'` for password fields allows the password manager to
499give more power to users to manage their credentials on websites. It is the
500security team's view that this is very important for user security by allowing
501users to have unique and more complex passwords for websites. As it was
502originally implemented, autocomplete='off' for password fields took control away
503from the user and gave control to the web site developer, which was also a
504violation of the [priority of
505constituencies](http://www.schemehostport.com/2011/10/priority-of-constituencies.html).
506For a longer discussion on this, see the [mailing list
507announcement](https://groups.google.com/a/chromium.org/forum/#!topic/chromium-dev/zhhj7hCip5c).
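
Concretely, markup like the following (an illustrative fragment, not taken
from any particular site) has no effect on Chrome's password manager for the
password field:

```html
<!-- Chrome will still offer to save and fill the password here,
     even though the site author requested autocomplete="off". -->
<form action="/login" method="post">
  <input type="text" name="user" autocomplete="off">
  <input type="password" name="pass" autocomplete="off">
</form>
```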

<a name="TOC-Why-doesn-t-the-Password-Manager-save-my-Google-password-if-I-am-using-Chrome-Sync-"></a>
## Why doesn't the Password Manager save my Google password if I am using Chrome Sync?

In its default mode, Chrome Sync uses your Google password to protect all the
other passwords in the Chrome Password Manager.

In general, it is a bad idea to store the credential that protects an asset in
the same place as the asset itself. An attacker who could temporarily compromise
the Chrome Password Manager could, by stealing your Google password, obtain
continuing access to all your passwords. Imagine you store your valuables in a
safe, and you accidentally forget to close the safe. If a thief comes along,
they might steal all of your valuables. That’s bad, but imagine if you had also
left the combination to the safe inside as well. Now the bad guy has access to
all of your valuables and all of your future valuables, too. The password
manager is similar, except you probably would not even know if a bad guy
accessed it.

To prevent this type of attack, the Chrome Password Manager does not save the
Google password for the account you sync with Chrome. If you have multiple
Google accounts, the Chrome Password Manager will save the passwords for
accounts other than the one you are syncing with.

<a name="TOC-Does-the-Password-Manager-store-my-passwords-encrypted-on-disk-"></a>
## Does the Password Manager store my passwords encrypted on disk?

Chrome generally tries to use the operating system's user storage mechanism
wherever possible, and passwords are stored encrypted on disk, but the details
are platform-specific:

* On Windows, Chrome uses the [Data Protection API
  (DPAPI)](https://msdn.microsoft.com/en-us/library/ms995355.aspx) to bind
  your passwords to your user account and store them on disk encrypted with
  a key only accessible to processes running as the same logged-on user.
* On macOS, Chrome previously stored credentials directly in the user's
  Keychain, but for technical reasons, it has switched to storing the
  credentials in "Login Data" in the Chrome user's profile directory,
  encrypted on disk with a key that is then stored in the user's Keychain.
  See [Issue 466638](https://crbug.com/466638) for further explanation.
* On Linux, credentials are stored in an encrypted database, and the password
  to decrypt the contents of that database is stored in KWallet or GNOME
  Keyring. (See [Issue 602624](https://crbug.com/602624).)
* On iOS, passwords are currently stored directly in the iOS Keychain and
  referenced from the rest of the metadata stored in a separate DB. The plan
  there is to just store them in plain text in the DB, because iOS gives
  strong guarantees about only Chrome being able to access its storage. See
  [Issue 520437](https://crbug.com/520437) to follow this migration.
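
The common pattern across these platforms is "encrypted blob in a local
database, decryption key held by the OS keystore." The Python sketch below
mimics that shape with a simplified stand-in schema (not Chrome's actual
"Login Data" schema) and a deliberately toy cipher, just to keep the example
self-contained:

```python
import sqlite3

# Placeholder "cipher": stands in for the real platform encryption
# (DPAPI on Windows, AES with a Keychain-held key on macOS, etc.).
# Do NOT use XOR like this for real secrets.
def toy_encrypt(plaintext: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(plaintext))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# In Chrome this key never lives in the database itself; it comes from
# the OS keystore (DPAPI, Keychain, KWallet, or GNOME Keyring).
key_from_os_keystore = b"stand-in-key"

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE logins ("
    "  origin_url TEXT,"
    "  username_value TEXT,"
    "  password_value BLOB)"  # encrypted blob, never plaintext
)
db.execute(
    "INSERT INTO logins VALUES (?, ?, ?)",
    ("https://example.com/", "alice",
     toy_encrypt(b"hunter2", key_from_os_keystore)),
)

url, user, blob = db.execute(
    "SELECT origin_url, username_value, password_value FROM logins"
).fetchone()
assert blob != b"hunter2"  # at rest, the password is not plaintext
print(user, toy_decrypt(blob, key_from_os_keystore).decode())
```

The design point the sketch illustrates: an attacker who copies only the
database file still needs the separately-stored key to recover any passwords.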

<a name="TOC-I-found-a-phishing-or-malware-site-not-blocked-by-Safe-Browsing.-Is-this-a-security-vulnerability-"></a>
## I found a phishing or malware site not blocked by Safe Browsing. Is this a security vulnerability?

Malicious sites not yet blocked by Safe Browsing can be reported via
[https://www.google.com/safebrowsing/report_phish/](https://www.google.com/safebrowsing/report_phish/).
Safe Browsing is primarily a blocklist of known-unsafe sites; the feature warns
the user if they attempt to navigate to a site known to deliver phishing or
malware content. You can learn more about this feature in these references:

* [https://developers.google.com/safe-browsing/](https://developers.google.com/safe-browsing/)
* [https://www.google.com/transparencyreport/safebrowsing/](https://www.google.com/transparencyreport/safebrowsing/)
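
Developers who want to check URLs against the blocklist programmatically can
use the Safe Browsing Lookup API (v4). The sketch below only builds the
request body; the client identifiers are placeholders, the API key is an
assumption you would obtain from the Google API Console, and the actual HTTP
POST is left commented out:

```python
import json

API_KEY = "YOUR_API_KEY"  # placeholder; get a real key from the API Console
ENDPOINT = ("https://safebrowsing.googleapis.com/v4/"
            f"threatMatches:find?key={API_KEY}")

def build_lookup_request(urls):
    """Build a v4 threatMatches:find request body for the given URLs."""
    return {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

body = json.dumps(build_lookup_request(
    ["http://malware.testing.google.test/testing/malware/"]))
# An empty JSON object in the response means no URL matched the list:
# import urllib.request
# req = urllib.request.Request(ENDPOINT, data=body.encode(),
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```

Note that the Lookup API tells you what the blocklist currently contains; per
the answer above, absence from the list is not itself a security bug.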

In general, it is not considered a security bug if a given malicious site is not
blocked by the Safe Browsing feature, unless the site is on the blocklist but is
allowed to load anyway. For instance, if a site found a way to navigate through
the blocking red warning page without user interaction, that would be a security
bug. A malicious site may exploit a security vulnerability (for instance,
spoofing the URL in the **Location Bar**). This would be tracked as a security
vulnerability in the relevant feature, not in Safe Browsing itself.

<a name="TOC-What-is-the-security-story-for-Service-Workers-"></a>
## What is the security story for Service Workers?

See our dedicated [Service Worker Security
FAQ](https://chromium.googlesource.com/chromium/src/+/master/docs/security/service-worker-security-faq.md).

## TODO

* Move https://www.chromium.org/developers/severity-guidelines into MD as
  well, and change links here to point to it.
* https://dev.chromium.org/Home/chromium-security/client-identification-mechanisms