Fix python generation when custom files and templates are specified#9572
spacether merged 2 commits into OpenAPITools:master
Conversation
@taxpon @frol @mbohlool @cbornet @kenjones-cisco @tomplus @Jyhess @arun-nalla @spacether

Fix for #9570
```diff
 if (userDefinedTemplates != null && !userDefinedTemplates.isEmpty()) {
     Map<String, SupportingFile> supportingFilesMap = config.supportingFiles().stream()
-            .collect(Collectors.toMap(TemplateDefinition::getTemplateFile, Function.identity()));
+            .collect(Collectors.toMap(TemplateDefinition::getTemplateFile, Function.identity(), (oldValue, newValue) -> oldValue));
```
What is this lambda doing? Why is it returning the oldValue?
I have no way to tell from this PR if this code works or not.
If this feature is changed in the future, we will not have any test that fails and the feature will break.
How about adding a test of processUserDefinedTemplates to ensure that this feature will keep working?
@spacether Absolutely valid points.
Collectors.toMap throws IllegalStateException when duplicate keys are found, hence the merge function returns a single value when duplicates are encountered. oldValue is the value already present in the map, whereas newValue is the value produced by a later element of the stream (in this case config.supportingFiles()). The code simply retains the existing value (oldValue) in the map when a duplicate (newValue) is encountered. Based on my testing, oldValue is the user-defined template, which is what we want to retain in the map.
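To make the merge-function behavior concrete, here is a minimal, self-contained sketch. `Tmpl` and `index` are hypothetical stand-ins for `SupportingFile` and the real lookup-map construction, not actual openapi-generator classes:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ToMapDemo {
    // Hypothetical stand-in for SupportingFile, keyed by template file name.
    record Tmpl(String templateFile, String source) {}

    static Map<String, Tmpl> index(List<Tmpl> files) {
        // Without the third argument, Collectors.toMap throws
        // IllegalStateException on a duplicate key; the merge function
        // (oldValue, newValue) -> oldValue keeps the first entry seen.
        return files.stream()
                .collect(Collectors.toMap(Tmpl::templateFile, t -> t,
                        (oldValue, newValue) -> oldValue));
    }

    public static void main(String[] args) {
        List<Tmpl> files = List.of(
                new Tmpl("api.mustache", "user-defined"),
                new Tmpl("api.mustache", "built-in"));
        // The user-defined entry comes first in the stream, so it wins.
        System.out.println(index(files).get("api.mustache").source()); // user-defined
    }
}
```

Whether "first wins" is the desired outcome depends entirely on the iteration order of `config.supportingFiles()`, which is why a regression test matters here.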
I will look into adding a test. Agreed there is no way to tell if the code works; how do you typically ensure PR quality?
Thanks for explaining what this code is doing.
Most of our basic tests are in: https://github.com/OpenAPITools/openapi-generator/blob/master/modules/openapi-generator/src/test/java/org/openapitools/codegen/DefaultCodegenTest.java
We typically add tests that can avoid generating a new output client/server sample if we can. If you have to, then generating a new sample is okay too.
@spacether Thank you for the pointers. I have added a test, but it looks like the config (line 684 of DefaultCodegenTest.java) I want to specify is not supported in CodegenConfigurator, unless I am missing something. Let me know if the test looks okay; I can remove the commented-out check based on your input.
Thank you for explaining that and adding the test. Your test looks good.
spacether left a comment
Thank you for your PR. This looks good.
PR checklist
This is important, as CI jobs will verify all generator outputs of your HEAD commit as it would merge with master.
These must match the expectations made by your contribution.
You may regenerate an individual generator by passing the relevant config(s) as an argument to the script, for example ./bin/generate-samples.sh bin/configs/java*. For Windows users, please run the script in Git BASH.
master, 5.1.x, 6.0.x