
update 50.2.1 (v4.0.3-14.4.3) and/or split requirement for content-security-policy #1958

Open
elarlang opened this issue May 14, 2024 · 9 comments
Labels: 1) Discussion ongoing (Issue is opened and assigned but no clear proposal yet), next meeting (Filter for leaders), V50 (Group issues related to Web Frontend), _5.0 - prep (This needs to be addressed to prepare 5.0)

Comments

@elarlang
Collaborator

elarlang commented May 14, 2024

Spin-off from #1311

We have 3 Content-Security-Policy related topics to discuss:

  • discussion 1 - a default level-1 requirement stating that browsers should only communicate with and load content from trusted parties
  • discussion 2 - requirement for document structure integrity
  • discussion 3 - allow-list vs nonces, stays in issue New CSP Requirement #1311

Note that we also have the CSP-related requirement 50.2.5 (v4.0.3-14.4.7):

Verify that the content of a web application cannot be embedded in a third-party site by default and that embedding of the exact resources is only allowed where necessary by using suitable Content-Security-Policy: frame-ancestors. Note that the X-Frame-Options solution is obsoleted.

As it addresses a separate attack vector (UI redress), it can stay as a separate requirement. It is also a first layer of defense, whereas most of CSP aims to be a second layer of defense against so-called XSS.
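For illustration only (the hostname is a placeholder, not a recommendation), the embedding restriction in 50.2.5 is typically expressed as one of the following response headers - the first forbids all framing, the second allows only a specific trusted parent origin:

```
Content-Security-Policy: frame-ancestors 'none'
Content-Security-Policy: frame-ancestors 'self' https://trusted.example
```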

Discussion 1

#1311 (comment)

Verify that the Content-Security-Policy header defines fetch, document, and navigation directives based on application needs, without allowing the browser to communicate any other resource and deny everything by default.

The entire discussion is in #1311; the TL;DR is that we cannot technically prescribe directives and values that suit every application, but we need to describe the principle: the application must communicate only with trusted parties.
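As a sketch only - the directive values and hostnames below are placeholders that have to be derived from the actual application needs, not a policy to copy - a deny-by-default policy covering fetch, document, and navigation directives could look like this (wrapped here for readability, sent as a single header line):

```
Content-Security-Policy: default-src 'none'; script-src 'self'; style-src 'self';
    img-src 'self'; font-src 'self'; connect-src 'self' https://api.example;
    base-uri 'self'; form-action 'self'; frame-ancestors 'none'
```

Note that base-uri, form-action, and frame-ancestors do not fall back to default-src, so a deny-by-default policy has to set them explicitly.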

Discussion 2

#1311 (comment)

Instead of saying "Do not use 'unsafe-inline'" we should try to explain the goal of the restriction.

In case there is an HTML injection vulnerability, disallowing the use of inline scripts should make the attacker's life more interesting.

In case there is a JavaScript injection into an inline script (between <script> .. </script> tags in an HTML document), an attacker can carry out a successful attack without communicating with any external party. Note that a nonce on the <script> tag does not help in this case either.

At first I thought to go in the recommendation direction:

To get the benefit of the Content-Security-Policy fetch directives for script and style (disallowing inline tags and eval for an attacker), the application should not use inline tags for script and style and should not allow 'unsafe-inline' or 'unsafe-eval' for script and style.
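A minimal sketch of such a policy (source values are placeholders): when script-src and style-src are present and do not include 'unsafe-inline' or 'unsafe-eval', the browser blocks inline <script> and <style> blocks, inline event handlers, and eval()-like constructs:

```
Content-Security-Policy: script-src 'self'; style-src 'self'
```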

If we go in the requirement direction (instead of a recommendation), it should carry these ideas:

Verify that

  • the application does not use inline scripts or styles in HTML documents and does not allow them in the Content-Security-Policy header
  • OR the integrity of all inline styles and scripts is covered by checksum hashes in the Content-Security-Policy header (see the sketch below)
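A sketch of the hash-based variant (wrapped for readability) - the hash values are placeholders and must be the base64-encoded SHA-256 of each exact inline block, recomputed whenever that block changes:

```
Content-Security-Policy: script-src 'self' 'sha256-<base64 hash of the inline script>';
    style-src 'self' 'sha256-<base64 hash of the inline style>'
```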

.. now, how to make it twice shorter :)

Feels like a level 3 requirement... or recommendation?

For the chapter, maybe this requirement should go together with the "SRI" requirement, and the current "V50.6 External Resource Integrity" should be renamed to "V50.6 Resource Integrity".

ping @Sjord @tw2as @tghosth - your ideas?

@elarlang added the V50 (Group issues related to Web Frontend) and next meeting (Filter for leaders) labels on May 14, 2024
@jmanico
Member

jmanico commented May 14, 2024

I politely disagree with this proposal. I do not want to focus on what NOT to do, but instead I would suggest we focus on what TO do. In this vein, I would like to build requirements for strict-csp as highlighted here #1311

@elarlang
Collaborator Author

Whether it becomes a positive requirement is a matter of wording fine-tuning; at the moment it is important to validate the idea.

For strict-csp, please keep the discussion in the original issue, #1311.

@tghosth added the 1) Discussion ongoing (Issue is opened and assigned but no clear proposal yet) and _5.0 - prep (This needs to be addressed to prepare 5.0) labels on May 19, 2024
@tghosth
Collaborator

tghosth commented May 20, 2024

We had some proposals that I liked in #1311, and I want to see where those go first. I want the CSP requirement to be as simple as possible and fall back on the CSP cheat sheet wherever possible.

@elarlang
Collaborator Author

As the mentioned issue #1311 is full of noise and childish downvotes to my comments, I will keep my comments in this issue.

I politely and with all respect make it clear here: please do not comment before reading the entire comment or issue. If needed, I will delete "noise comments" to keep the issue clean and followable.


To the topic now.

There are so many ways to do things (in)correctly that I think we should concentrate on the different attack vectors to mitigate, instead of only pointing to certain techniques the Content-Security-Policy header provides.

The focus - at the moment, everyone seems to talk only about JavaScript execution vectors. It is the most critical one, but it is not the only one.

For example, defense against UI redress / clickjacking requires one specific directive (frame-ancestors), and it is covered by a separate requirement.

The question now is: what are all the attack vectors we address with the Content-Security-Policy header? We need to list the vectors and then check, for each vector, whether using the Content-Security-Policy header with certain directives is the only option to solve it. With requirements, we need to eliminate or mitigate the attack vectors, not just set directives.

Problems to avoid - we should not write hidden requirements via Content-Security-Policy directives, e.g. asking not to use unsafe-inline means that using inline <script> .. </script> inside the HTML document is not allowed (without nonces or hashes).

We should not waste time discussing levels before requirement texts are set. As the Content-Security-Policy reporting requirement got in as a level 3 requirement (not as a recommendation), I think other related things should be aligned with that.


Attack vectors

  • Information leakage
    • Loading any external resource can already be considered information leakage - who is using the application, when, with which browser, and from what IP.
    • Referrer leakage - if there is HTML injection, it is possible to leak the referrer without JavaScript execution, even when the Referrer-Policy HTTP response header is set to no-referrer
  • JavaScript execution
  • Embedding by 3rd party sites
    • Not limited to ClickJacking only
    • Already addressed by a separate requirement, so we don't need to cover this

Anything else?


Information leakage - for this we need to use allow-lists. Some may be allergic to that, but using allow-lists is not the opposite of using nonces or hashes; they can be used together. With allow-lists, we need to be sure that nothing can be loaded from untrusted sources - not only scripts and styles.

Allow-lists - the main issue with allow-lists is that they must contain only self-controlled and/or trusted endpoints. In all the related research with a "too close to 100%" bypass ratio, the problem is that allow-listed hostnames host JSONP endpoints, shared scripts, gadgets, etc.

This includes a widespread problem - end-users can upload files into the allow-listed sphere, e.g. script-src 'self' is set but users can upload files that are served from the same hostname. If X-Content-Type-Options: nosniff is not set, the "JavaScript" file can be served with whatever content type.

So, what to take into account is that you may need to allow images from user-uploaded files, but you cannot allow script files. In this context, if the application validates images to be in the expected format and size, we can call them "trusted", e.g. an end-user cannot use WYSIWYG content or an HTML injection vulnerability to point thousands of images to external files gigabytes in size.
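For illustration, a sketch of the response headers for serving such a user-uploaded image (the content type is hypothetical): with X-Content-Type-Options: nosniff and an accurate Content-Type, the browser will refuse to execute the response as a script even if it is referenced from an allow-listed hostname, because the declared type is not a JavaScript MIME type.

```
Content-Type: image/png
X-Content-Type-Options: nosniff
```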

Mixing content and functionality - the main reason for JavaScript injection is mixing (untrusted) content and functionality.

The pre-condition for using hashes - the content must be static.

The pre-condition for using nonces - the content must be trusted. If there is a JavaScript injection into an inline <script>, then an attacker gets JavaScript execution from a "trusted zone".
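A sketch of a nonce-based policy - the nonce value is a placeholder and must be a fresh, unguessable value generated per response and repeated on each legitimate <script> tag:

```
Content-Security-Policy: script-src 'nonce-<random value generated per response>'
```

As noted above, the nonce only marks a block as trusted; if attacker-controlled data ends up inside a nonced inline script, it executes with that trust.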


Requirements

It will take some more time to reach requirement text proposals. Extra arguments and edge cases are more than welcome.

Personally I feel there is a need for 2 requirements (they were already proposed in #1311 (comment) and #1311 (comment), but got a bit lost in the noise, although they were used as a base for the recommendations there):

  • One requirement for allow-listing, so that resources are loaded from and communication goes only to trusted sources; this can be a level 1 requirement. If someone is going to post anything about the Google research and its close-to-100% bypass ratio, it means this person did not read the issue.
    • Note that there are no pre-conditions to implement this
  • One requirement for keeping dynamic data and functionality separate, and otherwise structuring the output and functionality so that the benefits CSP provides can be used
    • Note that this requirement is not directly CSP-oriented. It is a pre-condition for being able to use the CSP-provided features.

ping @Sjord @ryarmst (as I have seen valuable fact-based feedback from you on the topic)

@ryarmst
Contributor

ryarmst commented May 22, 2024

@elarlang I agree and I think it makes the most sense to approach this by vector. As far as attack vectors go, I would just expand "JavaScript execution" to a more general "malicious content execution or rendering", as there are directives preventing more than script execution.

@elarlang
Collaborator Author

I would just expand "JavaScript execution" to a more general "malicious content execution or rendering" as there are directives preventing more than script execution.

I think it is quite JavaScript-execution-driven. Do you have any other examples? A middle way could be "malicious scripts"?

What makes it feel off to me is that if there is an HTML injection, you can manipulate an HTML document visually and functionally, and there is little to no help from Content-Security-Policy to stop that, even though HTML syntax is malicious content in this context.

@ryarmst
Contributor

ryarmst commented May 22, 2024

Good point. There is no CSP feature to prevent modification of HTML content, but it can prevent actions like the submission of forms to untrusted origins via HTML form hijacking. It can also block loading of various non-script resources, so there can be restrictions in the content that can be used to manipulate a document, though I think many of these are not particularly useful for mitigating real attacks.

There is also the sandbox directive which - in addition to limiting what resources can load/do - also isolates the origin. This would also fall under the vector category of "malicious scripts" but I think it would be more appropriate in specific contexts (like uploaded content/files rather than an injection issue).

@elarlang
Collaborator Author

elarlang commented May 22, 2024

Good point. There is no CSP feature to prevent modification of HTML content, but it can prevent actions like the submission of forms to untrusted origins via HTML form hijacking. It can also block loading of various non-script resources, so there can be restrictions in the content that can be used to manipulate a document, though I think many of these are not particularly useful for mitigating real attacks.

That is addressed/addressable with the first proposed requirement - allow loading resources from, and communication with, trusted parties only; it includes things like form-action and base-uri.
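As a minimal sketch of that part of the first proposed requirement (values are placeholders), restricting form submission targets and <base> URLs would look like:

```
Content-Security-Policy: form-action 'self'; base-uri 'self'
```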

There is also the sandbox directive which - in addition to limiting what resources can load/do - also isolates the origin. This would also fall under the vector category of "malicious scripts" but I think it would be more appropriate in specific contexts (like uploaded content/files rather than an injection issue).

Note that for sandbox (which is again just one option to address the vector), there is already the "V50.5 Unintended Content Interpretation" section:

50.5.3 [ADDED, DEPRECATES 14.4.2] Verify that security controls are in place to prevent browsers from rendering content or functionality in HTTP responses in an incorrect context (e.g., when an API or other resource is loaded directly). Possible controls could include: not serving the content unless headers indicate it is the correct context, Content-Security-Policy: sandbox, Content-Disposition: attachment, etc.
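For illustration, the controls listed in 50.5.3 map to response headers such as these, sent on a response that should not be rendered in a browsing context (the filename is hypothetical):

```
Content-Security-Policy: sandbox
Content-Disposition: attachment; filename="report.json"
X-Content-Type-Options: nosniff
```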

@jmanico
Member

jmanico commented May 27, 2024

There are so many other ways to steal/exfiltrate data from an injected page that the CSP defenses to stop data leakage are mostly not important. The standard stopped trying to plug those holes. I suggest we just focus on XSS defense and not worry about trying to set standards that prevent data from being stolen once the injection occurs.

Good point. There is no CSP feature to prevent modification of HTML content, but it can prevent actions like the submission of forms to untrusted origins via HTML form hijacking. It can also block loading of various non-script resources, so there can be restrictions in the content that can be used to manipulate a document, though I think many of these are not particularly useful for mitigating real attacks.

There is also the sandbox directive which - in addition to limiting what resources can load/do - also isolates the origin. This would also fall under the vector category of "malicious scripts" but I think it would be more appropriate in specific contexts (like uploaded content/files rather than an injection issue).
