Add functions susceptible to AST stack overflow to calls blacklist #423
naglis wants to merge 9 commits into
Conversation
I think you must first create an Issue.
I'm not sure what you mean by this.
I think it's a fair addition to the list of rules. I don't think it will cause too much pain for existing consumers (beyond other PyCQA projects). I do wonder, however, if this project is starting to be treated less as a list of security vulnerabilities and more as a "Well, this could be used in a highly specialized case" stick to hit people with.
@sigmavirus24, thanks for the feedback. I can't comment on the intended scope of the project. I can see how this rule can introduce potential false positives, but that can be said about most other rules as well. My biggest issue is with
sigmavirus24 left a comment
I see. So what's confusing here is the difference between literal_eval and eval. The former is recommended over the latter not because parsing is more secure, but because execution is, if I remember correctly. Bandit grew out of OpenStack, and OpenStack has large, unwieldy, and complex configuration files for setting up services. Sometimes those services want complex values for an option and resorted to Python data structures in the config. Instead of trying to manually parse them, some used eval, but as the official Python docs explain, literal_eval is better:
This can be used for safely evaluating strings containing Python values from untrusted sources without the need to parse the values oneself. It is not capable of evaluating arbitrarily complex expressions, for example involving operators or indexing.
So it's not so much that one is going to be doing

ast.literal_eval('log.warning(f"{GLOBAL_SECRET}")')

but more

ast.literal_eval('{"foo": "bar", "biz": "baz"}')

If we document "X is bad, use Y" and then the user uses Y, which causes us to then say "Y is bad, don't use that", we lose all credibility as a tool.
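To make the distinction concrete, here is a small sketch of the two cases above: ast.literal_eval happily evaluates plain literals such as the dict, but raises ValueError on anything containing names, calls, or operators, so the "execute arbitrary code" risk of eval does not apply.

```python
import ast

# A plain literal (the config-file use case) evaluates fine.
config = ast.literal_eval('{"foo": "bar", "biz": "baz"}')
print(config["foo"])  # → bar

# Anything that is not a pure literal is rejected outright.
try:
    ast.literal_eval('log.warning("x")')
except ValueError as exc:
    print("rejected:", exc)
```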
I think what's necessary here is this rule as you've written it, with the other functions included.
I think we also need to update the documentation for the existing eval() and exec() checks to explain that the recommendations are for safer but not safe methods. Everything is a shade of gray of course.
I think the tool should be consistent though. Along similar lines, I think the severity of our ast_overflow check should be Low. Given the typical use-cases I don't think there's significant security impact or severity.
    'attacks. Consider using tmpfile() instead.'
))

# omitted eval() and exec() as they are already covered by B307 and B102
I think we should include them here. It's a different rule.
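For context, a blacklist entry for these calls might look roughly like the following. This is a sketch: build_conf_dict mirrors the shape of bandit's helper of the same name as I understand it, and the rule name and 'B3XX' ID are placeholders, not the ones assigned in this PR.

```python
# Stand-in for bandit.blacklists.utils.build_conf_dict (assumed shape).
def build_conf_dict(name, bid, qualnames, message, level='MEDIUM'):
    return {'name': name, 'id': bid, 'qualnames': qualnames,
            'message': message, 'level': level}

entry = build_conf_dict(
    'ast_overflow',            # hypothetical rule name
    'B3XX',                    # placeholder ID
    ['ast.literal_eval', 'ast.parse', 'compile', 'dbm.dumb.open'],
    'Deeply nested input to this call can overflow the parser stack and '
    'crash the interpreter (denial of service).',
    'LOW',                     # per the review discussion, severity should be Low
)
print(entry['level'])
```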
@sigmavirus24 I mean that maybe it is not important to add a rule for a denial of service caused by a known bug in Python.
@ehooo then should we remove the checks for other standard libraries too? I think we do our best to warn folks about things in the language/standard library enough that this makes sense to add. That's just my 2 pence though. I think I definitely want to hear @ericwb's opinion on this as well at least.
Updated the PR to lower the rule severity to low, added both
Should this be covered in this PR?
@@ -0,0 +1,45 @@
# -*- coding:utf-8 -*-
#
# Copyright 2018 Hewlett-Packard Development Company, L.P.
Don't need HP in the copyright (unless you work there and did this under HP's time and salary).
Haven't re-reviewed but a lot has happened.
This adds the following calls to the blacklist:

ast.literal_eval
ast.parse
compile
dbm.dumb.open

I did not add eval and exec as they are already covered by B307 and B102, respectively, and I'm not sure if duplicating them would make sense.

References:
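The stack-overflow risk the PR targets can also be mitigated at the call site. A minimal sketch, assuming a simple pre-parse depth heuristic (the safe_literal_eval name and the max_depth threshold are my own, not from the PR): very deeply nested input like '[' * 100_000 + ']' * 100_000 can exhaust the parser's stack before literal_eval ever gets to validate it, so a cheap bracket count rejects it up front.

```python
import ast

def safe_literal_eval(text, max_depth=100):
    # Hypothetical guard: refuse suspiciously nested input before parsing,
    # since parsing very deep nesting can overflow the interpreter stack.
    # A bracket count is a crude over-approximation of nesting depth.
    if any(text.count(ch) > max_depth for ch in "([{"):
        raise ValueError("input too deeply nested")
    return ast.literal_eval(text)

print(safe_literal_eval('{"foo": "bar"}'))        # normal literals are fine
# safe_literal_eval('[' * 100_000 + ']' * 100_000)  # would raise ValueError
```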