Fix deployments for scenarios with dict editors after model reload #7123
base: staging
Conversation
Force-pushed from b72dc6d to 0c12933.
created: #7140 |
Force-pushed from d462dec to 34eac2c.
Resolved review threads (outdated):

- ...r/server/src/main/scala/pl/touk/nussknacker/ui/process/newdeployment/DeploymentService.scala
- extensions-api/src/main/scala/pl/touk/nussknacker/engine/deployment/DeploymentData.scala
- ...pl/touk/nussknacker/engine/management/periodic/AdditionalDictComponentConfigsExtractor.scala (3 threads)
```diff
@@ -121,6 +123,24 @@ class EmbeddedDeploymentManager(
     }
   }

+  // We make sure that we don't let deploy a scenario when any parameter editor was modified to dictionary one by AdditionalUIConfigProvider
+  // as that would result in failure during compilation before execution
+  private def ensureAdditionalConfigsDoNotContainDictionaryEditors(deploymentData: DeploymentData): Unit = {
```
similar logic is encapsulated in AdditionalDictComponentConfigsExtractor. When it changes, we will have to remember that this place should be modified too. IMO it'd be nice to have it in one place
I used the extracted method from AdditionalComponentConfigsForRuntimeExtractor, so the logic is in one place.
ok, but I'm not sure I understand why we're still focusing on the "dict" case here. Shouldn't we call getRequiredAdditionalConfigsForRuntime and, when we get a non-empty map, just fail as we fail now?
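The suggestion above can be sketched as follows. Note that `ComponentId`, `ComponentConfig`, `requiredForRuntime`, and `ensureNoRequiredConfigs` are hypothetical, simplified stand-ins for the real Nussknacker types; the point is only to show the proposed control flow of failing on any non-empty "required" map instead of special-casing dict editors at the call site:

```scala
// Hypothetical, simplified stand-ins for the real Nussknacker types.
final case class ComponentId(value: String)
final case class ComponentConfig(hasDictEditor: Boolean)

object RequiredConfigsExtractor {
  // Stand-in for getRequiredAdditionalConfigsForRuntime: keep only configs
  // that the runtime actually needs (here: those with a dict editor).
  def requiredForRuntime(
      configs: Map[ComponentId, ComponentConfig]
  ): Map[ComponentId, ComponentConfig] =
    configs.filter { case (_, cfg) => cfg.hasDictEditor }
}

object DeployGuard {
  // The reviewer's suggestion: if anything "required" survives filtering,
  // fail the deployment up front, whatever kind of config it is.
  def ensureNoRequiredConfigs(configs: Map[ComponentId, ComponentConfig]): Unit = {
    val required = RequiredConfigsExtractor.requiredForRuntime(configs)
    if (required.nonEmpty)
      throw new IllegalArgumentException(
        s"Deployment not supported, required additional configs present for: ${required.keys.map(_.value).mkString(", ")}"
      )
  }
}
```

With this shape, the deployment manager never inspects the "dict" case itself; it only checks whether the extractor left anything behind.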
Resolved review thread (outdated): scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/ModelData.scala
Walkthrough

The changes in this pull request introduce various enhancements across multiple components in the Nussknacker engine. Key modifications include the addition of implicit encoders and decoders for JSON serialization.
Actionable comments posted: 6
🧹 Outside diff range and nitpick comments (29)
extensions-api/src/main/scala/pl/touk/nussknacker/engine/deployment/AdditionalModelConfigs.scala (1)
6-9: Consider adding documentation for the serialization contract.

While the `@JsonCodec` annotation correctly handles JSON serialization, it's important to document the serialization contract, since this class is used for deployment data transfer. Add ScalaDoc explaining:

```diff
+/**
+ * Represents additional model configurations that need to be passed during deployment.
+ * This class is serialized to JSON when sending data to Flink.
+ *
+ * @param additionalConfigsFromProvider Map of component IDs to their additional configurations
+ */
 @JsonCodec case class AdditionalModelConfigs(
   additionalConfigsFromProvider: Map[DesignerWideComponentId, ComponentAdditionalConfig]
 )
```

common-api/src/main/scala/pl/touk/nussknacker/engine/api/parameter/ParameterName.scala (1)
10-16: Consider adding serialization tests for deployment scenarios.

Since these serialization changes are crucial for fixing deployment issues with dictionary editors, consider adding comprehensive tests that verify:
- Round-trip serialization/deserialization of parameters with dictionary configurations
- Correct handling of parameters in map contexts
- Edge cases like special characters in parameter names
Would you like me to help create a test suite for these scenarios?
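For illustration, a minimal sketch of the round-trip such a test suite would exercise with circe is shown below. `ParamName` is a hypothetical stand-in for the real `ParameterName`, and the codecs are hand-written with `contramap`/`map` rather than derived, so this is a shape sketch, not the PR's actual implementation:

```scala
import io.circe.{Decoder, Encoder, KeyDecoder, KeyEncoder}

// Hypothetical mirror of ParameterName: a single-field wrapper that must
// serialize both as a plain JSON string and as a JSON object key.
final case class ParamName(value: String)

object ParamName {
  // Unwrapped value codecs: ParamName("p") <-> "p"
  implicit val encoder: Encoder[ParamName] = Encoder.encodeString.contramap(_.value)
  implicit val decoder: Decoder[ParamName] = Decoder.decodeString.map(ParamName(_))

  // Key codecs, needed so that Map[ParamName, _] serializes as a JSON object
  implicit val keyEncoder: KeyEncoder[ParamName] = KeyEncoder.encodeKeyString.contramap(_.value)
  implicit val keyDecoder: KeyDecoder[ParamName] = KeyDecoder.decodeKeyString.map(ParamName(_))
}
```

With these implicits in scope, `Map(ParamName("a") -> cfg)` encodes to `{"a": ...}`, which is exactly the map-key handling the review comment above calls out.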
extensions-api/src/main/scala/pl/touk/nussknacker/engine/deployment/DeploymentData.scala (2)
Line range hint 5-11: LGTM! Consider adding documentation.

The addition of `additionalModelConfigs` is a clean solution for passing configuration data. Consider adding ScalaDoc to document the purpose and usage of this field, especially its relationship with dictionary editors and model reloading.

```diff
 @JsonCodec case class DeploymentData(
+    /** Deployment identifier */
     deploymentId: DeploymentId,
+    /** User performing the deployment */
     user: User,
+    /** Additional deployment-specific data */
     additionalDeploymentData: Map[String, String],
+    /** Data specific to deployed nodes */
     nodesData: NodesDeploymentData,
+    /** Additional model configurations (e.g., dictionary editor configs) that need to be passed during deployment */
     additionalModelConfigs: AdditionalModelConfigs
 )
```
27-34: Add a return type annotation for clarity.

The method implementation is correct, but adding a return type annotation would improve code documentation.

```diff
-  def withDeploymentId(deploymentIdString: String) =
+  def withDeploymentId(deploymentIdString: String): DeploymentData =
     DeploymentData(
```

designer/deployment-manager-api/src/main/scala/pl/touk/nussknacker/engine/DeploymentManagerDependencies.scala (1)
21-21: Consider adding ScalaDoc for the new parameter.

The parameter addition looks good and aligns with the PR's objective of passing additional configurations during deployment. However, it would be helpful to add ScalaDoc explaining:
- The purpose of these configurations
- When they are populated
- Their role in the deployment process
Here's a suggested documentation:
```diff
 case class DeploymentManagerDependencies(
     deployedScenariosProvider: ProcessingTypeDeployedScenariosProvider,
     actionService: ProcessingTypeActionService,
     scenarioActivityManager: ScenarioActivityManager,
     executionContext: ExecutionContext,
     actorSystem: ActorSystem,
     sttpBackend: SttpBackend[Future, Any],
+    /** Additional component configurations collected during model reload.
+      * These configurations, particularly for dictionary editors, are passed to Flink
+      * during deployment to ensure consistent behavior between design and runtime.
+      */
     configsFromProvider: Map[DesignerWideComponentId, ComponentAdditionalConfig] = Map.empty
```

components-api/src/main/scala/pl/touk/nussknacker/engine/api/component/AdditionalUIConfigProvider.scala (2)
Line range hint 28-35: Consider adding URL validation for the docsUrl field.

While the case class structure is well-designed for configuration passing, consider adding validation for the `docsUrl` field to ensure it contains a valid URL format. Here's a suggested implementation:

```scala
import java.net.URL

case class ComponentAdditionalConfig(
    parameterConfigs: Map[ParameterName, ParameterAdditionalUIConfig],
    icon: Option[String] = None,
    docsUrl: Option[String] = None,
    componentGroup: Option[ComponentGroupName] = None,
    disabled: Boolean = false
) {
  docsUrl.foreach { url =>
    try {
      new URL(url)
    } catch {
      case _: Exception => throw new IllegalArgumentException(s"Invalid documentation URL: $url")
    }
  }
}
```
Line range hint 37-44: Add field documentation for better maintainability.

The case class would benefit from ScalaDoc comments explaining the purpose and expected values of each field, especially `valueEditor` and `valueCompileTimeValidation`, which have specific use cases. Here's a suggested documentation format:

```scala
/**
 * Configuration for additional UI parameters.
 *
 * @param required Whether the parameter must be provided
 * @param initialValue Default value for the parameter
 * @param hintText Helper text displayed in the UI
 * @param valueEditor Custom editor configuration (e.g., dict editor)
 * @param valueCompileTimeValidation Validation rules applied during compilation
 */
case class ParameterAdditionalUIConfig(
    required: Boolean,
    initialValue: Option[FixedExpressionValue],
    hintText: Option[String],
    valueEditor: Option[ParameterValueInput],
    valueCompileTimeValidation: Option[ParameterValueCompileTimeValidation]
)
```

designer/deployment-manager-api/src/main/scala/pl/touk/nussknacker/engine/util/AdditionalComponentConfigsForRuntimeExtractor.scala (2)
8-13: Consider enhancing the method documentation for future maintainability.

While the comment indicates the purpose (reducing data sent to Flink), it would be helpful to be more specific about what configurations are considered "required". Currently, it only returns configurations with dict editors, but the method name suggests it might handle other required configurations in the future. Consider updating the comment to be more precise:

```diff
-  // This is done to reduce data sent to Flink
+  // Filters configurations to only include dictionary editors to reduce data sent to Flink.
+  // Currently, only dictionary editors are considered required for runtime.
```
17-30: Consider refactoring for improved readability and maintainability.

While the implementation is functionally correct, it could be made more maintainable with a few adjustments. Consider this refactoring to improve readability:

```diff
   def getAdditionalConfigsWithDictParametersEditors(
       additionalComponentConfigs: Map[DesignerWideComponentId, ComponentAdditionalConfig]
-  ): Map[DesignerWideComponentId, ComponentAdditionalConfig] = additionalComponentConfigs
+  ): Map[DesignerWideComponentId, ComponentAdditionalConfig] = {
+    def hasDictEditor(config: ParameterAdditionalUIConfig): Boolean =
+      config.valueEditor.exists(_.isInstanceOf[ValueInputWithDictEditor])
+
+    additionalComponentConfigs
       .map { case (componentId, componentAdditionalConfig) =>
-        val parametersWithDictEditors = componentAdditionalConfig.parameterConfigs.filter {
-          case (_, additionalUiConfig) =>
-            additionalUiConfig.valueEditor match {
-              case Some(_: ValueInputWithDictEditor) => true
-              case _                                 => false
-            }
-        }
+        val parametersWithDictEditors = componentAdditionalConfig.parameterConfigs.filter {
+          case (_, config) => hasDictEditor(config)
+        }
         componentId -> componentAdditionalConfig.copy(parameterConfigs = parametersWithDictEditors)
       }
       .filter(_._2.parameterConfigs.nonEmpty)
+  }
```

This refactoring:
- Extracts the dictionary editor check into a named function for better readability
- Simplifies the pattern matching
- Adds proper method formatting with braces
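A self-contained, runnable sketch of this two-step filter follows. The types here (`ParamConfig`, `ComponentConfig`, `DictEditor`, and plain `String` component IDs) are hypothetical simplifications of `ParameterAdditionalUIConfig`, `ComponentAdditionalConfig`, `ValueInputWithDictEditor`, and `DesignerWideComponentId`, used only to demonstrate the filtering idea:

```scala
// Hypothetical simplified editor model.
sealed trait ValueEditor
case object PlainEditor extends ValueEditor
final case class DictEditor(dictId: String) extends ValueEditor

final case class ParamConfig(valueEditor: Option[ValueEditor])
final case class ComponentConfig(parameterConfigs: Map[String, ParamConfig])

object DictEditorFilter {

  private def hasDictEditor(p: ParamConfig): Boolean =
    p.valueEditor.exists(_.isInstanceOf[DictEditor])

  // Step 1: keep only parameters that use a dict editor.
  // Step 2: drop components whose parameter map became empty,
  // mirroring the two-step filter in the reviewed method.
  def onlyDictEditorConfigs(configs: Map[String, ComponentConfig]): Map[String, ComponentConfig] =
    configs
      .map { case (id, cfg) =>
        id -> cfg.copy(parameterConfigs = cfg.parameterConfigs.filter { case (_, p) => hasDictEditor(p) })
      }
      .filter { case (_, cfg) => cfg.parameterConfigs.nonEmpty }
}
```

Because both steps are pure `Map` transformations, the result is minimal by construction, which is what keeps the payload sent to Flink small.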
components-api/src/main/scala/pl/touk/nussknacker/engine/api/component/DesignerWideComponentId.scala (1)
Line range hint 6-8: Consider documenting the migration plan.

The TODO comment indicates this class is temporary and might be replaced by `ComponentId`. Since we're adding serialization capabilities, this could make future refactoring more complex. Consider:
- Documenting the migration strategy
- Adding a timeline for the planned changes
- Noting any potential breaking changes that might affect serialized data
engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/compiler/VerificationFlinkProcessCompilerDataFactory.scala (1)
20-21: Consider adding documentation for the new parameter.

To improve maintainability, consider adding a comment explaining that `additionalConfigsFromProvider` is specifically used for handling dictionary editor configurations during validation.

```diff
 new StubbedFlinkProcessCompilerDataFactory(
   process,
   modelData.configCreator,
   modelData.extractModelDefinitionFun,
   modelData.modelConfig,
   modelData.namingStrategy,
+  // Used for validating components with dictionary editors configured through AdditionalUiConfigProvider
   componentUseCase = ComponentUseCase.Validation,
   modelData.additionalConfigsFromProvider
 ) {
```
engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/runner/FlinkTestMain.scala (1)
40-42: Consider filtering dictionary-specific configurations.

According to the PR objectives, only dictionary-related configurations should be passed to minimize data sent via HTTP. Consider adding filtering logic here.

```diff
-      DeploymentData.empty.copy(additionalModelConfigs =
-        AdditionalModelConfigs(modelData.additionalConfigsFromProvider)
-      ),
+      DeploymentData.empty.copy(additionalModelConfigs =
+        AdditionalModelConfigs(
+          modelData.additionalConfigsFromProvider.filter(isDictionaryConfig)
+        )
+      ),
```

engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/runner/FlinkProcessMain.scala (1)
30-30: Consider documenting the configuration filtering strategy.

While the implementation correctly passes additional configurations through DeploymentData, it would be valuable to document the filtering strategy that ensures only dictionary-related configurations are included. This helps maintain the optimization of minimizing HTTP payload size during deployment.
Consider adding a comment above this line explaining:
- What types of configurations are passed
- Why filtering is important for performance
- Where the filtering occurs in the pipeline
designer/deployment-manager-api/src/test/scala/pl/touk/nussknacker/engine/util/AdditionalComponentConfigsForRuntimeExtractorTest.scala (3)
19-34: Consider enhancing test coverage with additional test cases.

While the current test case verifies the basic functionality, consider adding:
- Edge cases (empty map, null values)
- Case with multiple components having dictionary editors
- Negative assertions to verify components without dictionary editors are properly excluded
- Case with mixed dictionary editor configurations (some parameters with editors, some without)
Also, consider making the test name more specific, e.g., "should filter components retaining only parameters with dictionary editors and excluding components without any dictionary editors"
38-69: Consider extracting magic strings to named constants.

The parameter configurations use several magic strings. Consider extracting these to named constants for better maintainability:

```diff
+  private val InitialValueExpression = "'someInitialValueExpression'"
+  private val InitialValueLabel      = "someInitialValueLabel"
+  private val DictAId                = "someDictA"
+  private val DictBId                = "someDictB"
+  private val HintText               = "someHint"
   private val parameterAWithDictEditor = (
     ParameterName("parameterA"),
     ParameterAdditionalUIConfig(
       required = true,
-      initialValue = Some(FixedExpressionValue("'someInitialValueExpression'", "someInitialValueLabel")),
+      initialValue = Some(FixedExpressionValue(InitialValueExpression, InitialValueLabel)),
       hintText = None,
-      valueEditor = Some(ValueInputWithDictEditor("someDictA", allowOtherValue = true)),
+      valueEditor = Some(ValueInputWithDictEditor(DictAId, allowOtherValue = true)),
       valueCompileTimeValidation = None
     )
   )
```
1-99: Test implementation aligns well with PR objectives.

This test file effectively verifies the filtering of dictionary editor configurations, which is crucial for fixing the deployment issues mentioned in the PR objectives. The test data structure accurately represents the scenarios where `AdditionalUiConfigProvider` modifies components to use dictionary editors.

Consider adding integration tests that verify the end-to-end flow:

- Model modification through `AdditionalUiConfigProvider`
- Filtering of configurations
- Passing filtered data through `DeploymentData`
- Successful deployment without hanging
engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/compiler/StubbedFlinkProcessCompilerDataFactory.scala (1)
Line range hint 44-83: Consider enhancing component processing with additional configurations.

The `adjustDefinitions` method could potentially benefit from utilizing the new `configsFromProviderWithDictionaryEditor` parameter when processing components, especially for components that might have dictionary editor configurations. Consider updating the component processing logic to incorporate additional configurations:

```diff
 val processedComponents = originalModelDefinition.components.components.map {
   case source if usedSourceIds.contains(source.id) =>
+    val additionalConfig = configsFromProviderWithDictionaryEditor.get(source.id)
+    prepareSourceFactory(source.withAdditionalConfig(additionalConfig), definitionContext)
   case service if service.componentType == ComponentType.Service =>
+    val additionalConfig = configsFromProviderWithDictionaryEditor.get(service.id)
+    prepareService(service.withAdditionalConfig(additionalConfig), definitionContext)
   case other => other
 }
```
Line range hint
93-101
: Add configsFromProvider to PeriodicDeploymentManager constructor.The
configsFromProvider
parameter is passed toPeriodicProcessService
but not stored in thePeriodicDeploymentManager
. This could lead to the configuration being lost if the manager needs to recreate the service.Apply this diff to fix the issue:
class PeriodicDeploymentManager private[periodic] ( val delegate: DeploymentManager, service: PeriodicProcessService, schedulePropertyExtractor: SchedulePropertyExtractor, customActionsProvider: PeriodicCustomActionsProvider, - toClose: () => Unit + toClose: () => Unit, + configsFromProvider: ConfigsFromProvider )(implicit val ec: ExecutionContext)scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/ModelData.scala (2)
Line range hint 54-66: Update the method documentation to reflect the new parameter type.

The method comment describes the old parameter type (`Config`), but the method now accepts `ModelConfigs`. Please update the documentation to reflect this change and explain the purpose of the additional configurations being passed.
107-110: Add documentation for the ModelConfigs case class.

This new case class effectively addresses the previous review comment about introducing a well-structured configuration class. However, it would benefit from ScalaDoc documentation explaining:
- The purpose of the class
- The role of each field
- Usage examples or scenarios
engine/lite/embeddedDeploymentManager/src/main/scala/pl/touk/nussknacker/engine/embedded/EmbeddedDeploymentManager.scala (1)
127-139: Enhance the error message with specific parameter details.

The validation logic correctly prevents deployment of scenarios with dictionary editors, addressing the core issue. However, the error message could be more helpful by including details about which parameters were modified. Consider enhancing the error message to include the affected parameters:

```diff
-        throw new IllegalArgumentException(
-          "Parameter editor modification to ValueInputWithDictEditor by AdditionalUIConfigProvider is not supported for Lite engine"
-        )
+        throw new IllegalArgumentException(
+          s"Parameter editor modification to ValueInputWithDictEditor by AdditionalUIConfigProvider is not supported for Lite engine. " +
+            s"Affected parameters: ${configsWithDictEditors.map(_.paramName).mkString(", ")}"
+        )
```

designer/server/src/main/scala/pl/touk/nussknacker/ui/process/newdeployment/DeploymentService.scala (1)
228-243: Consider enhancing error handling and logging.

While the implementation correctly handles additional configurations, consider these improvements:
- Add debug logging when configurations are missing or empty
- Consider adding validation for required configurations before deployment
```diff
 private def createDeploymentData(
     deploymentId: LegacyDeploymentId,
     loggedUser: LoggedUser,
     nodesData: NodesDeploymentData,
     processingType: ProcessingType
 ) = {
+  val configs = additionalComponentConfigs.forProcessingType(processingType)(loggedUser).getOrElse(Map.empty)
+  logger.debug(s"Additional configurations for deployment [$deploymentId]: ${configs.keys.mkString(", ")}")
   DeploymentData(
     deploymentId = deploymentId,
     user = loggedUser.toManagerUser,
     additionalDeploymentData = Map.empty,
     nodesData = nodesData,
     additionalModelConfigs = AdditionalModelConfigs(
       AdditionalComponentConfigsForRuntimeExtractor.getRequiredAdditionalConfigsForRuntime(
-        additionalComponentConfigs.forProcessingType(processingType)(loggedUser).getOrElse(Map.empty)
+        configs
       )
     )
   )
 }
```

engine/flink/management/periodic/src/test/scala/pl/touk/nussknacker/engine/management/periodic/PeriodicProcessServiceTest.scala (1)
123-124: Add test coverage for dictionary editor configurations.

While the addition of the empty map parameter maintains backward compatibility, consider adding test cases that verify the handling of dictionary editor configurations through `AdditionalUiConfigProvider`. This would help ensure the fix for deployments with dict editors works as expected. Example test case to consider adding:

```scala
test("should handle dictionary editor configurations during deployment") {
  val f = new Fixture {
    override val periodicProcessService = new PeriodicProcessService(
      // ... other parameters ...
      Map("componentId" -> ComponentAdditionalConfig(Map("dictEditor" -> "someConfig")))
    )
  }
  // Test deployment with dictionary editor configuration
  // Verify the configuration is properly passed to deployment data
}
```

engine/flink/management/periodic/src/test/scala/pl/touk/nussknacker/engine/management/periodic/PeriodicDeploymentManagerTest.scala (1)
77-78: LGTM! Consider adding test cases for dict editor configurations.

The addition of `ProcessingTypeActionServiceStub` and an empty configuration map aligns with the PR objectives for handling dict editors. However, since this PR specifically addresses issues with dict editors after model reload, consider adding test cases that verify:
- Deployment behavior with non-empty dict editor configurations
- Model reload scenarios with modified dict editors
designer/server/src/test/scala/pl/touk/nussknacker/test/base/it/NuResourcesTest.scala (1)
Line range hint 4-4: Address the TODO comment about migration to NuItTest.

The comment indicates that this trait will be removed in favor of NuItTest with NuScenarioConfigurationHelper. To prevent technical debt accumulation, consider planning the migration. Would you like me to:

- Create a GitHub issue to track the migration task?
- Help generate examples of how to migrate existing tests to use NuItTest?

engine/flink/management/periodic/src/main/scala/pl/touk/nussknacker/engine/management/periodic/PeriodicProcessService.scala (1)
Line range hint 182-182: Consider implementing parallel deployment for different schedule names.

The TODO comment indicates a potential enhancement to allow simultaneous deployment of scenarios with different schedule names. This could improve system throughput and reduce unnecessary serialization of independent deployments.
Would you like me to help design and implement this enhancement? I can:
- Create a detailed design proposal
- Open a GitHub issue to track this improvement
engine/flink/executor/src/test/scala/pl/touk/nussknacker/engine/process/runner/FlinkTestMainSpec.scala (3)
666-684: Use `should include` for the exception message assertion.

Using `shouldBe` with the exact exception message may cause the test to be fragile if the message changes slightly. Consider using `should include` for a more robust assertion. Apply this diff to modify the exception assertion:

```diff
-    dictEditorException.getMessage shouldBe "DictKeyWithLabel expression can only be used with DictParameterEditor, got Some(DualParameterEditor(StringParameterEditor,RAW))"
+    dictEditorException.getMessage should include ("DictKeyWithLabel expression can only be used with DictParameterEditor")
```
686-720: Enhance the test by verifying the expected output.

While the test checks that no exceptions are raised when the additional configuration is provided, it's beneficial to assert that the process produces the expected results to fully validate the functionality. Consider adding assertions to verify the output received by the `valueMonitor` sink or other expected outcomes.
sink or other expected outcomes.
734-735
: Maintain parameter consistency in method signatureFor improved readability, consider placing
additionalConfigsFromProvider
beforeenrichDefaultConfig
in therunFlinkTest
method signature to align with the order in which they are used within the method.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (35)
- common-api/src/main/scala/pl/touk/nussknacker/engine/api/parameter/ParameterName.scala (1 hunks)
- components-api/src/main/scala/pl/touk/nussknacker/engine/api/component/AdditionalUIConfigProvider.scala (3 hunks)
- components-api/src/main/scala/pl/touk/nussknacker/engine/api/component/DesignerWideComponentId.scala (2 hunks)
- designer/deployment-manager-api/src/main/scala/pl/touk/nussknacker/engine/DeploymentManagerDependencies.scala (2 hunks)
- designer/deployment-manager-api/src/main/scala/pl/touk/nussknacker/engine/util/AdditionalComponentConfigsForRuntimeExtractor.scala (1 hunks)
- designer/deployment-manager-api/src/test/scala/pl/touk/nussknacker/engine/util/AdditionalComponentConfigsForRuntimeExtractorTest.scala (1 hunks)
- designer/server/src/main/scala/pl/touk/nussknacker/ui/process/deployment/DefaultProcessingTypeDeployedScenariosProvider.scala (2 hunks)
- designer/server/src/main/scala/pl/touk/nussknacker/ui/process/deployment/DeploymentService.scala (3 hunks)
- designer/server/src/main/scala/pl/touk/nussknacker/ui/process/newdeployment/DeploymentService.scala (6 hunks)
- designer/server/src/main/scala/pl/touk/nussknacker/ui/server/AkkaHttpBasedRouteProvider.scala (6 hunks)
- designer/server/src/test/scala/pl/touk/nussknacker/test/base/it/NuResourcesTest.scala (1 hunks)
- designer/server/src/test/scala/pl/touk/nussknacker/test/utils/domain/TestFactory.scala (2 hunks)
- designer/server/src/test/scala/pl/touk/nussknacker/ui/notifications/NotificationServiceTest.scala (3 hunks)
- designer/server/src/test/scala/pl/touk/nussknacker/ui/process/deployment/DeploymentServiceSpec.scala (1 hunks)
- designer/server/src/test/scala/pl/touk/nussknacker/ui/process/newdeployment/DeploymentServiceTest.scala (1 hunks)
- docs/Changelog.md (1 hunks)
- engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/compiler/FlinkProcessCompilerDataFactory.scala (4 hunks)
- engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/compiler/StubbedFlinkProcessCompilerDataFactory.scala (2 hunks)
- engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/compiler/TestFlinkProcessCompilerDataFactory.scala (1 hunks)
- engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/compiler/VerificationFlinkProcessCompilerDataFactory.scala (1 hunks)
- engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/runner/FlinkProcessMain.scala (2 hunks)
- engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/runner/FlinkTestMain.scala (2 hunks)
- engine/flink/executor/src/test/scala/pl/touk/nussknacker/engine/process/runner/FlinkTestMainSpec.scala (3 hunks)
- engine/flink/management/periodic/src/main/scala/pl/touk/nussknacker/engine/management/periodic/PeriodicDeploymentManager.scala (1 hunks)
- engine/flink/management/periodic/src/main/scala/pl/touk/nussknacker/engine/management/periodic/PeriodicProcessService.scala (4 hunks)
- engine/flink/management/periodic/src/test/scala/pl/touk/nussknacker/engine/management/periodic/PeriodicDeploymentManagerTest.scala (1 hunks)
- engine/flink/management/periodic/src/test/scala/pl/touk/nussknacker/engine/management/periodic/PeriodicProcessServiceIntegrationTest.scala (1 hunks)
- engine/flink/management/periodic/src/test/scala/pl/touk/nussknacker/engine/management/periodic/PeriodicProcessServiceTest.scala (1 hunks)
- engine/flink/management/src/test/scala/pl/touk/nussknacker/engine/management/FlinkRestManagerSpec.scala (2 hunks)
- engine/lite/embeddedDeploymentManager/src/main/scala/pl/touk/nussknacker/engine/embedded/EmbeddedDeploymentManager.scala (4 hunks)
- engine/lite/runtime-app/src/main/scala/pl/touk/nussknacker/engine/lite/app/RunnableScenarioInterpreterFactory.scala (2 hunks)
- extensions-api/src/main/scala/pl/touk/nussknacker/engine/deployment/AdditionalModelConfigs.scala (1 hunks)
- extensions-api/src/main/scala/pl/touk/nussknacker/engine/deployment/DeploymentData.scala (1 hunks)
- scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/ModelData.scala (3 hunks)
- utils/flink-components-testkit/src/main/scala/pl/touk/nussknacker/engine/flink/util/test/FlinkProcessCompilerDataFactoryWithTestComponents.scala (3 hunks)
🔇 Additional comments (63)
extensions-api/src/main/scala/pl/touk/nussknacker/engine/deployment/AdditionalModelConfigs.scala (2)
1-5: LGTM! Clean and focused imports.

The imports are minimal and specific to the requirements. Good practice using the curly braces for multiple imports from the same package.
1-15: Verify integration with deployment services.

The class looks well-designed for its purpose, but let's verify its integration with deployment services to ensure it properly addresses the deployment hanging issue mentioned in the PR objectives.
common-api/src/main/scala/pl/touk/nussknacker/engine/api/parameter/ParameterName.scala (2)
3-4: LGTM! Appropriate use of Circe's semi-automatic derivation.

The imports are specific and well-chosen for JSON serialization capabilities.
11-15: Implementation looks good. Verify usage in deployment context.
The JSON serialization implementation is correct and follows best practices:
- Using unwrapped derivation for the single-field case class
- Proper handling of map keys through string-based encoding
Let's verify how these encoders are used in the deployment context:
✅ Verification successful
Let me gather more information about the usage of these encoders in the deployment-related files we found.
Based on the gathered information, I can now provide the final response:
JSON encoders/decoders for ParameterName are correctly used in deployment context
The implementation is safe to use as:

- `ParameterName` is used consistently across deployment-related code for parameter identification and validation
- The encoders are used in deployment API endpoints for request/response serialization
- Test coverage confirms proper handling of `ParameterName` in various scenarios, including dictionary editors and validation errors
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Search for JSON serialization of ParameterName in deployment-related code
rg -l "ParameterName.*[encode|decode]" | grep -i "deploy"
# Search for potential JSON-related issues with ParameterName
rg "ParameterName.*json" -A 5
```

Length of output: 2735
Script:
#!/bin/bash
# Check deployment-related files for actual usage of encoders/decoders
rg -A 5 -B 5 "ParameterName" engine/lite/k8sDeploymentManager/src/main/scala/pl/touk/nussknacker/k8s/manager/RequestResponseScenarioValidator.scala
rg -A 5 -B 5 "ParameterName" engine/lite/embeddedDeploymentManager/src/main/scala/pl/touk/nussknacker/engine/embedded/EmbeddedRequestResponseScenarioValidator.scala
rg -A 5 -B 5 "ParameterName" designer/server/src/main/scala/pl/touk/nussknacker/ui/api/description/DeploymentApiEndpoints.scala
# Check for JSON-related imports in these files
rg "import.*json" engine/lite/k8sDeploymentManager/src/main/scala/pl/touk/nussknacker/k8s/manager/RequestResponseScenarioValidator.scala
rg "import.*json" engine/lite/embeddedDeploymentManager/src/main/scala/pl/touk/nussknacker/engine/embedded/EmbeddedRequestResponseScenarioValidator.scala
rg "import.*json" designer/server/src/main/scala/pl/touk/nussknacker/ui/api/description/DeploymentApiEndpoints.scala
# Check test file for encoder/decoder usage patterns
rg -A 5 -B 5 "ParameterName" designer/deployment-manager-api/src/test/scala/pl/touk/nussknacker/engine/util/AdditionalComponentConfigsForRuntimeExtractorTest.scala
Length of output: 5902
extensions-api/src/main/scala/pl/touk/nussknacker/engine/deployment/DeploymentData.scala (2)
19-25
: LGTM! Empty instance properly initialized.
The empty instance correctly initializes all fields, including the new additionalModelConfigs
field with its empty state.
11-11
: Verify AdditionalModelConfigs implementation.
Let's ensure the AdditionalModelConfigs
type is properly defined and handles dictionary editor configurations correctly.
✅ Verification successful
AdditionalModelConfigs implementation is properly defined and used correctly
The verification confirms that AdditionalModelConfigs is properly implemented and used:
- It is a case class that wraps additionalConfigsFromProvider: Map[DesignerWideComponentId, ComponentAdditionalConfig], which is exactly what is needed for passing additional configuration through DeploymentData
- It has proper JSON codecs and an empty instance helper
- It is used consistently throughout the codebase for deployment purposes, particularly in:
- Model configuration
- Deployment data preparation
- Test scenarios
- Periodic process service
- Flink execution
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for AdditionalModelConfigs definition and usage
echo "Searching for AdditionalModelConfigs definition:"
rg --type scala "case class AdditionalModelConfigs" -A 10
echo -e "\nSearching for AdditionalModelConfigs usage in deployment context:"
rg --type scala "additionalModelConfigs.*dictionary" -A 5
Length of output: 1483
Script:
#!/bin/bash
# Let's search for more context about ComponentAdditionalConfig and its usage
echo "Searching for ComponentAdditionalConfig definition:"
rg --type scala "ComponentAdditionalConfig" -A 5
echo -e "\nSearching for DesignerWideComponentId definition:"
rg --type scala "DesignerWideComponentId" -A 5
echo -e "\nSearching for usage of AdditionalModelConfigs in deployment:"
rg --type scala "AdditionalModelConfigs" -B 2 -A 2
Length of output: 278066
designer/deployment-manager-api/src/main/scala/pl/touk/nussknacker/engine/DeploymentManagerDependencies.scala (1)
9-9
: LGTM!
The import statement is clean and properly scoped for the new types needed.
components-api/src/main/scala/pl/touk/nussknacker/engine/api/component/AdditionalUIConfigProvider.scala (2)
Line range hint 3-26
: LGTM! Clear separation of concerns in configuration handling.
The trait design effectively separates configurations that require model reload from those that take effect immediately, which aligns well with the PR's objective of handling dynamic modifications properly.
Line range hint 28-44
: Verify integration with DeploymentData.
The new configuration classes look good and align with the PR objectives. Let's verify their integration with the deployment process.
✅ Verification successful
Based on the search results, I can now provide a final response regarding the integration of ComponentAdditionalConfig with DeploymentData:
Integration with DeploymentData is properly implemented.
The code shows that ComponentAdditionalConfig is correctly integrated with DeploymentData through the AdditionalModelConfigs class, which is a field in DeploymentData. The configuration flows through the following path:
- ComponentAdditionalConfig is collected from providers and passed via AdditionalModelConfigs
- AdditionalModelConfigs is included in DeploymentData
- DeploymentData is used during deployment to configure components with their additional UI settings
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the integration of ComponentAdditionalConfig with DeploymentData
# Check for DeploymentData usage of ComponentAdditionalConfig
echo "Checking DeploymentData integration..."
rg -t scala "DeploymentData.*ComponentAdditionalConfig" -A 5
# Check for dict editor handling
echo "Checking dict editor configuration handling..."
rg -t scala "dictEditor.*ParameterValueInput" -A 5
Length of output: 305
Script:
#!/bin/bash
# Let's try a broader search to understand the usage and integration
# Search for DeploymentData references
echo "Checking DeploymentData usage..."
rg -t scala "DeploymentData" -A 3
# Search for ComponentAdditionalConfig usage
echo "Checking ComponentAdditionalConfig usage..."
rg -t scala "ComponentAdditionalConfig" -A 3
# Search for ParameterValueInput references
echo "Checking ParameterValueInput usage..."
rg -t scala "ParameterValueInput" -A 3
# Search for files containing both DeploymentData and ComponentAdditionalConfig
echo "Checking files with both DeploymentData and ComponentAdditionalConfig..."
rg -l -t scala "DeploymentData" | xargs rg -l "ComponentAdditionalConfig"
Length of output: 212997
designer/deployment-manager-api/src/main/scala/pl/touk/nussknacker/engine/util/AdditionalComponentConfigsForRuntimeExtractor.scala (2)
1-5
: LGTM! Clean and minimal imports.
The imports are specific and well-organized, following good practices by avoiding wildcard imports.
6-32
: Verify integration with deployment services.
The implementation aligns well with the PR objectives by filtering out unnecessary configurations before sending to Flink. However, to ensure robust deployment handling:
- Consider adding logging to track filtered configurations in production
- Consider adding metrics to monitor the size reduction of configuration data
- Ensure deployment services properly handle empty configuration maps
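The filtering idea described above can be sketched in a few lines. This is a hypothetical Python model of what AdditionalComponentConfigsForRuntimeExtractor does conceptually — keep only the components whose additional config contains at least one dictionary-editor parameter, and within them only those parameters; the dictionary shape and editor labels are illustrative, not the real Scala model.

```python
def filter_dict_editor_configs(configs_by_component):
    """configs_by_component: {component_id: {param_name: editor_kind}}.

    Returns only the entries needed at runtime: components that have at
    least one dictionary-editor parameter, restricted to those parameters.
    """
    filtered = {}
    for component_id, params in configs_by_component.items():
        dict_params = {
            name: editor
            for name, editor in params.items()
            if editor == "dict-editor"
        }
        if dict_params:  # drop components with no dict-editor parameters
            filtered[component_id] = dict_params
    return filtered

configs = {
    "service-a": {"country": "dict-editor", "limit": "raw-editor"},
    "service-b": {"threshold": "raw-editor"},
}
print(filter_dict_editor_configs(configs))
# {'service-a': {'country': 'dict-editor'}}
```

A deployment service consuming the result should tolerate an empty map, which is exactly the third point above.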
Let's verify the integration with deployment services:
components-api/src/main/scala/pl/touk/nussknacker/engine/api/component/DesignerWideComponentId.scala (2)
4-4
: LGTM: Import changes are appropriate
The added import for KeyEncoder
and KeyDecoder
is necessary and well-organized with other circe imports.
17-19
: LGTM: Key encoder/decoder implementation is sound
The implementation follows circe's best practices for key encoding/decoding and maintains consistency with the existing value semantics.
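The consistency requirement mentioned here — the key codec must mirror the value semantics (lowercase conversion) — can be sketched as follows. This is plain Python illustrating the contract, assuming the lowercase normalization described in this review, not the circe implementation itself; the component id is a made-up example.

```python
def encode_key(component_id: str) -> str:
    # Mirror the value semantics: the id is normalized to lowercase.
    return component_id.lower()

def decode_key(raw: str) -> str:
    # Decoding re-applies the same normalization, so round-trips are stable.
    return raw.lower()

def encode_map(m: dict) -> dict:
    return {encode_key(k): v for k, v in m.items()}

def decode_map(m: dict) -> dict:
    return {decode_key(k): v for k, v in m.items()}

original = {"Flink-Kafka-Source": {"enabled": True}}
round_tripped = decode_map(encode_map(original))
print(round_tripped)  # keys come back normalized to lowercase
```

If the key codec diverged from the value codec's normalization, a map serialized by one side could fail to match entries deserialized by the other.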
Let's verify the key encoding/decoding usage in the codebase:
✅ Verification successful
The search results show that DesignerWideComponentId is extensively used as a key in various Map data structures throughout the codebase, particularly in configuration-related contexts. Let's verify if there are any JSON encoding/decoding usages to ensure our implementation is sound.
LGTM: Key encoder/decoder implementation is properly integrated with existing JSON serialization
The implementation is verified to be sound and consistent with the codebase:
- The class already has standard circe encoders/decoders (encoder/decoder)
- The new key encoders/decoders complement the existing ones for JSON map serialization
- The implementation is actively used in configuration-related contexts throughout the codebase
- The value semantics (lowercase conversion) is consistently maintained across all encoders/decoders
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for usages of DesignerWideComponentId as a map key
# to ensure the new encoders/decoders will work as expected
# Search for Map types using DesignerWideComponentId
rg "Map\[DesignerWideComponentId" -A 2
# Search for potential JSON encoding/decoding of maps with DesignerWideComponentId
ast-grep --pattern 'encode($$$Map[DesignerWideComponentId]$$$)'
Length of output: 25175
Script:
#!/bin/bash
# Search for JSON encoding/decoding of maps with DesignerWideComponentId
rg "(?i)json.*map.*designerwidecomponentid" -A 2
# Search for circe imports in files using DesignerWideComponentId as map keys
rg -l "Map\[DesignerWideComponentId" | xargs rg "import io.circe"
# Search for actual JSON serialization/deserialization code
rg "(?i)(encode|decode).*designerwidecomponentid" -A 2
Length of output: 2853
engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/compiler/VerificationFlinkProcessCompilerDataFactory.scala (1)
20-21
: LGTM! Changes align with PR objectives.
The addition of modelData.additionalConfigsFromProvider
properly enables the verification factory to handle dictionary editor configurations during model validation, which is a crucial part of fixing the deployment issues described in the PR.
engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/runner/FlinkTestMain.scala (3)
11-11
: LGTM: Import statement is correctly placed
The addition of AdditionalModelConfigs
and DeploymentData
imports aligns with the PR's objective of handling additional configurations during deployment.
35-44
: LGTM: Implementation aligns with PR objectives
The changes correctly integrate additional configurations from the model provider into the deployment data, addressing the core issue described in the PR objectives.
35-44
: Verify test coverage for additional configurations
Ensure that test cases exist to verify:
- Behavior with empty additional configurations
- Behavior with dictionary editor configurations
- Error handling for invalid configurations
✅ Verification successful
Let me analyze the test coverage by examining the implementation of FlinkTestMainSpec.scala
more closely.
Let me check the specific test cases that verify the behavior with dictionary editor configurations.
Based on the test coverage analysis, I can now provide the final response:
Test coverage is adequate for additional configurations
The codebase has comprehensive test coverage for the changes:
- Tests verify error handling when dictionary editor configurations are mismatched
- Tests validate successful execution with properly configured AdditionalModelConfigs
- Tests check integration between AdditionalUiConfigProvider and dictionary editor parameters
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for relevant test cases in the codebase
echo "Searching for test files related to FlinkTestMain..."
rg -l "FlinkTestMain" --type scala --glob "*Spec.scala" --glob "*Test.scala"
echo "Searching for dictionary editor test cases..."
rg -A 5 "dictEditor|AdditionalModelConfigs" --type scala --glob "*Spec.scala"
Length of output: 5293
Script:
#!/bin/bash
# Get the content of FlinkTestMainSpec.scala to analyze test cases
echo "Content of FlinkTestMainSpec.scala:"
rg -A 20 "class FlinkTestMainSpec" engine/flink/executor/src/test/scala/pl/touk/nussknacker/engine/process/runner/FlinkTestMainSpec.scala
echo "\nSearching for test cases related to AdditionalModelConfigs:"
ast-grep --pattern 'AdditionalModelConfigs($$$)' engine/flink/executor/src/test/scala/pl/touk/nussknacker/engine/process/runner/FlinkTestMainSpec.scala
Length of output: 1404
Script:
#!/bin/bash
# Get the content of test cases related to dictionary editor and configurations
echo "Test cases for dictionary editor and configurations:"
rg -B 5 -A 15 "should run correctly when parameter was modified by AdditionalUiConfigProvider" engine/flink/executor/src/test/scala/pl/touk/nussknacker/engine/process/runner/FlinkTestMainSpec.scala
echo "\nTest cases for error handling:"
rg -B 5 -A 15 "dictEditorException" engine/flink/executor/src/test/scala/pl/touk/nussknacker/engine/process/runner/FlinkTestMainSpec.scala
Length of output: 2630
engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/runner/FlinkProcessMain.scala (2)
7-7
: LGTM!
Clean addition of the required import for ModelConfigs.
30-30
: Verify the integration with DeploymentData
The change correctly implements the solution described in the PR objectives by passing additional configurations through DeploymentData to Flink execution. This should resolve the dictionary editor deployment issues.
Let's verify the integration:
✅ Verification successful
Based on the gathered information, I can now provide a final response:
Integration between DeploymentData and ModelData is correctly implemented
The verification confirms that:
- DeploymentData correctly includes additionalModelConfigs field of type AdditionalModelConfigs
- ModelData.duringFlinkExecution properly accepts ModelConfigs which contains the additional configurations
- The integration is used in both test and production code paths
- The change in FlinkProcessMain.scala correctly passes the additional configurations through the established pathway
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the integration between DeploymentData and ModelData
# We want to ensure:
# 1. DeploymentData contains additionalModelConfigs
# 2. ModelData properly handles these configs
# Check DeploymentData structure
ast-grep --pattern 'case class DeploymentData($$$, additionalModelConfigs: $_)'
# Check ModelData handling
ast-grep --pattern 'object ModelData {
$$$
def duringFlinkExecution(modelConfigs: ModelConfigs)
$$$
}'
# Verify test coverage
rg -A 5 "additionalModelConfigs.*dictionary"
Length of output: 239
Script:
#!/bin/bash
# Let's try a different approach to verify the integration
# Check DeploymentData class definition
rg -l "case class DeploymentData" --type scala
# Once we find the file, let's look at its content
rg -A 10 "case class DeploymentData" --type scala
# Check ModelData implementation
rg -l "object ModelData" --type scala
# Look at ModelData implementation details
rg -A 10 "object ModelData" --type scala
# Check for any tests related to this integration
rg -l "DeploymentData.*ModelData" --type scala
Length of output: 4553
Script:
#!/bin/bash
# Let's check the ModelData.duringFlinkExecution method and ModelConfigs usage
# Look for ModelConfigs definition and usage
rg -l "case class ModelConfigs" --type scala
# Check the implementation of duringFlinkExecution
rg -A 10 "def duringFlinkExecution" --type scala
# Check for AdditionalModelConfigs definition
rg -A 5 "AdditionalModelConfigs" --type scala
# Look for tests involving these configurations
rg -l "additionalModelConfigs.*ModelConfigs" --type scala
Length of output: 19695
engine/lite/runtime-app/src/main/scala/pl/touk/nussknacker/engine/lite/app/RunnableScenarioInterpreterFactory.scala (2)
7-7
: LGTM: Import statement correctly updated
The addition of ModelConfigs
to the import statement is necessary and follows Scala conventions.
28-28
: LGTM: ModelConfigs wrapper added correctly
The change to wrap modelConfig
in ModelConfigs
aligns with the PR's objective to improve configuration handling for dictionary editors. This modification ensures proper structure for additional configurations while maintaining backward compatibility.
Let's verify the ModelConfigs implementation:
✅ Verification successful
ModelConfigs usage is correctly implemented and consistent across the codebase
The verification confirms that ModelConfigs is properly implemented in scenario-compiler, with support for both the basic configuration (modelInputConfig) and additional configurations (additionalModelConfigs). The usage in RunnableScenarioInterpreterFactory aligns with other implementations across the codebase, particularly in:
- Flink execution contexts (FlinkProcessMain, FlinkTestMain)
- Deployment services (DeploymentService)
- Test scenarios (FlinkTestMainSpec)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify ModelConfigs implementation and usage
# Expected: ModelConfigs should handle both basic config and additional dictionary configurations
# Check ModelConfigs implementation
rg -t scala "case class ModelConfigs" -A 10
# Verify other usages of ModelConfigs for consistency
rg -t scala "ModelConfigs\(" -A 2
Length of output: 6085
designer/deployment-manager-api/src/test/scala/pl/touk/nussknacker/engine/util/AdditionalComponentConfigsForRuntimeExtractorTest.scala (2)
1-18
: LGTM! Imports are well-organized and complete.
The imports cover all necessary dependencies for testing and domain classes.
71-97
: Verify the impact of allowOtherValue parameter.
The test data includes dictionary editors with different allowOtherValue
settings (true/false), but the test doesn't verify how this affects the filtering behavior.
utils/flink-components-testkit/src/main/scala/pl/touk/nussknacker/engine/flink/util/test/FlinkProcessCompilerDataFactoryWithTestComponents.scala (4)
8-8
: LGTM: Import addition aligns with new configuration handling
The new import for ComponentAdditionalConfig and DesignerWideComponentId is necessary for the additional configuration handling being implemented.
Line range hint 69-70
: LGTM: Appropriate handling in test context
The use of an empty map for additionalConfigsFromProvider
is appropriate in this test utility context, as test components don't require additional configurations.
51-51
: LGTM: Dictionary editor configuration properly handled
The addition of configsFromProviderWithDictionaryEditor
and its propagation to the factory constructor implements the core functionality needed for fixing dictionary editor deployments.
Also applies to: 59-59
✅ Verification successful
Parameter addition verified and properly integrated
The verification confirms that:
- The configsFromProviderWithDictionaryEditor parameter is correctly defined in the main FlinkProcessCompilerDataFactory class
- The parameter is consistently used across related classes, including FlinkProcessCompilerDataFactoryWithTestComponents and StubbedFlinkProcessCompilerDataFactory
- The parameter type Map[DesignerWideComponentId, ComponentAdditionalConfig] matches in all implementations
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify that FlinkProcessCompilerDataFactory constructor accepts the new parameter
ast-grep --pattern 'class FlinkProcessCompilerDataFactory($$$, configsFromProviderWithDictionaryEditor: Map[DesignerWideComponentId, ComponentAdditionalConfig], $$$)'
Length of output: 168
Script:
#!/bin/bash
# Let's try to find the FlinkProcessCompilerDataFactory class definition first
rg "class FlinkProcessCompilerDataFactory" -A 10
# Also search for any usage of configsFromProviderWithDictionaryEditor
rg "configsFromProviderWithDictionaryEditor"
Length of output: 3073
39-40
: LGTM: Properly propagates additional configurations
The addition of modelData.additionalConfigsFromProvider
ensures that additional UI configurations are properly passed through to the factory, which is essential for the dictionary editor deployment fix.
✅ Verification successful
Based on the code inspection, I can now generate the final response:
Verified: additionalConfigsFromProvider is a valid field in ModelData
The addition of modelData.additionalConfigsFromProvider
is correct as the ModelData trait explicitly defines this field:
def additionalConfigsFromProvider: Map[DesignerWideComponentId, ComponentAdditionalConfig]
This field is used throughout the codebase to propagate additional UI configurations, which aligns with the PR's objective of fixing dictionary editor deployment.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify that ModelData class has the additionalConfigsFromProvider field
ast-grep --pattern 'case class ModelData($$$, additionalConfigsFromProvider: $_, $$$)'
Length of output: 88
Script:
#!/bin/bash
# Let's try to find the ModelData class definition with a broader search
rg -l "ModelData"
# Then look for additionalConfigsFromProvider usage
rg "additionalConfigsFromProvider"
Length of output: 27805
Script:
#!/bin/bash
# Let's look at the ModelData class definition in detail
rg -A 20 "case class ModelData" scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/ModelData.scala
Length of output: 109
Script:
#!/bin/bash
# Let's try to find the ModelData class definition with a different approach
cat scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/ModelData.scala
Length of output: 11674
designer/server/src/main/scala/pl/touk/nussknacker/ui/process/deployment/DefaultProcessingTypeDeployedScenariosProvider.scala (2)
9-9
: LGTM!
The import of AdditionalModelConfigs
is correctly added and properly grouped with related deployment imports.
54-55
: Verify if empty configurations align with PR objectives
The use of NodesDeploymentData.empty
and AdditionalModelConfigs.empty
seems to contradict the PR objective of "passing the necessary additional configuration through DeploymentData". According to the PR description, we should be passing filtered dictionary-related configurations here.
Questions to consider:
- Should we be passing actual dictionary configurations instead of empty ones?
- Could this affect existing deployments that might rely on nodes deployment data?
Let's verify the usage of these configurations in other parts of the codebase:
✅ Verification successful
Let me gather more context about the dictionary-related configurations and their usage.
Let me check one more thing to understand how dictionary configurations are handled in the deployment process.
Based on the gathered information, I can now provide the final response:
Using empty configurations is correct for deployed scenarios retrieval
The use of NodesDeploymentData.empty and AdditionalModelConfigs.empty in DefaultProcessingTypeDeployedScenariosProvider is correct because:
- This code is specifically for retrieving information about already deployed scenarios, not for new deployments
- The dictionary configurations are only needed during the actual deployment process, which is handled separately in DeploymentService, where proper dictionary configurations are passed through AdditionalComponentConfigsForRuntimeExtractor
- The PR's objective of "passing necessary additional configuration through DeploymentData" is implemented in the deployment flow, not in the scenarios retrieval flow
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for other instances where DeploymentData is constructed
ast-grep --pattern 'DeploymentData($$$)'
# Search for references to dictionary configurations
rg -l "AdditionalModelConfigs"
Length of output: 14278
Script:
#!/bin/bash
# Check the implementation of AdditionalComponentConfigsForRuntimeExtractor
rg -A 10 "object AdditionalComponentConfigsForRuntimeExtractor"
# Look for dictionary configuration related code
rg -l "dictionary.*[cC]onfig"
# Check the PR file changes to understand the context better
git show --name-status
Length of output: 4456
Script:
#!/bin/bash
# Check the implementation of getAdditionalConfigsWithDictParametersEditors
rg -A 10 "getAdditionalConfigsWithDictParametersEditors"
# Look at the implementation of ValueInputWithDictEditor
ast-grep --pattern 'case class ValueInputWithDictEditor($$$)'
Length of output: 7404
designer/server/src/test/scala/pl/touk/nussknacker/ui/process/newdeployment/DeploymentServiceTest.scala (1)
56-57
: LGTM! The additional parameter aligns with PR objectives.
The addition of TestFactory.additionalComponentConfigsByProcessingType
to the DeploymentService
constructor is correct and aligns with the PR's goal of fixing deployments for scenarios with dict editors.
engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/compiler/TestFlinkProcessCompilerDataFactory.scala (1)
38-39
: Verify test coverage for additional configurations
The addition of modelData.additionalConfigsFromProvider
aligns with the PR's goal of fixing dictionary editor deployments. However, we should ensure proper test coverage.
Let's verify the test coverage:
✅ Verification successful
Based on the search results and the context from the review comment, I can now generate the final response:
Test coverage is adequate for additional configurations
The addition of modelData.additionalConfigsFromProvider is properly covered through:
- Integration with FlinkTestMain, which verifies configuration passing through deployment data
- Test implementations in StubbedFlinkProcessCompilerDataFactoryTest
- A test implementation that matches the production behavior, as it uses the same configuration passing mechanism
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for test cases that verify the handling of additional configurations
# Look for test files that verify dictionary editor configurations are properly passed through
# Search for relevant test cases
rg -l "additionalConfigsFromProvider.*dictionary" --type scala
# Search for test scenarios involving dictionary editors
ast-grep --pattern 'test(".*dictionary.*") { $$$'
Length of output: 116
Script:
#!/bin/bash
# Let's try a broader search to understand the test coverage
# Search for test files related to dictionary configurations
rg -l "dictionary.*config" --type scala
# Search for usages of additionalConfigsFromProvider
rg "additionalConfigsFromProvider" --type scala -A 3
# Look for test classes that might verify this functionality
fd -e scala -e java ".*Test.*" --exec grep -l "dictionary"
Length of output: 39256
Script:
#!/bin/bash
# Let's check for test coverage specifically related to dictionary configurations and model reloading
# Search for test cases involving dictionary configurations
rg -A 5 "dictionary.*config.*test" --type scala
# Look for test cases involving model reloading
rg -A 5 "model.*reload.*test" --type scala
# Check for test implementations in the deployment manager
rg -A 5 "dictionary.*editor.*deploy" --type scala
Length of output: 146
Script:
#!/bin/bash
# Let's check the implementation and usage context more broadly
# Check for dictionary editor related code
rg "dictionaryEditor" --type scala -B 2 -A 2
# Look for AdditionalUiConfigProvider implementations
ast-grep --pattern 'class $_ extends AdditionalUIConfigProvider { $$$'
# Check for test implementations using TestFlinkProcessCompilerDataFactory
rg "TestFlinkProcessCompilerDataFactory" --type scala -B 2 -A 2
Length of output: 3353
engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/compiler/StubbedFlinkProcessCompilerDataFactory.scala (2)
6-6
: LGTM: Import addition is appropriate.
The new import for component-related classes aligns with the changes needed to support additional UI configurations.
32-33
: Verify consistent usage of the new configuration parameter.
The addition of configsFromProviderWithDictionaryEditor
aligns with the PR objectives. Let's verify its usage pattern across the codebase.
Also applies to: 40-40
✅ Verification successful
Let me gather more information about the parameter propagation and usage.
Based on the verification results, I can now provide the final response:
Parameter usage is consistent across the codebase
The configsFromProviderWithDictionaryEditor parameter is properly:
- Defined in the base FlinkProcessCompilerDataFactory class
- Correctly propagated in StubbedFlinkProcessCompilerDataFactory
- Consistently used in test utilities (FlinkProcessCompilerDataFactoryWithTestComponents)
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify consistent usage of configsFromProviderWithDictionaryEditor parameter
# across related classes and ensure it's properly propagated.
# Check for parameter usage in related classes
rg -t scala "configsFromProviderWithDictionaryEditor.*Map\[DesignerWideComponentId, ComponentAdditionalConfig\]"
# Check for potential missing propagation
ast-grep --pattern 'class $_ extends FlinkProcessCompilerDataFactory {
$$$
}'
Length of output: 891
Script:
#!/bin/bash
# Check the implementation details of classes using this parameter
rg -A 10 -B 2 "configsFromProviderWithDictionaryEditor.*Map\[DesignerWideComponentId, ComponentAdditionalConfig\]"
# Check for any references to this parameter in method calls
rg "configsFromProviderWithDictionaryEditor"
# Look for the base class implementation
ast-grep --pattern 'class FlinkProcessCompilerDataFactory {
$$$
}'
Length of output: 7891
engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/compiler/FlinkProcessCompilerDataFactory.scala (3)
5-5
: LGTM!
The import statement is correctly added for the new types needed by the additional configuration parameter.
39-39
: LGTM! Verify configuration propagation.
The new parameter for dictionary editor configurations is correctly added to both constructors. The auxiliary constructor properly initializes it from ModelData.
Let's verify that the configuration is properly propagated through ModelData:
Also applies to: 51-51
✅ Verification successful
Based on the search results, I can see that additionalConfigsFromProvider
is properly propagated through the codebase:
- It originates from
ModelData
and flows through toFlinkProcessCompilerDataFactory
- The configuration is used consistently across the codebase for component definition extraction and validation
- The auxiliary constructor correctly initializes it from
modelData.additionalConfigsFromProvider
- The configuration is properly passed down to component definition extractors and validators
Configuration propagation verified successfully
The new parameter is correctly integrated into the configuration flow, with proper initialization and propagation through the component definition and validation pipeline.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if ModelData properly handles the additional configs
# Expected: Find references showing the configuration flow from ModelData to this class
rg -A 3 "additionalConfigsFromProvider"
Length of output: 38845
128-128
: LGTM! Verify model definition extraction.
The configuration is correctly passed to the model definition extraction process, which should resolve the deployment issues with dictionary editors.
Let's verify the model definition extraction process:
✅ Verification successful
Configuration is correctly passed to model definition extraction
The code search confirms that configsFromProviderWithDictionaryEditor
is correctly passed to the model definition extraction process. The implementation in ModelDefinitionFromConfigCreatorExtractor
and ModelDefinitionExtractor
both accept additionalConfigsFromProvider
as a parameter, which is used during the model definition extraction. The test stub StubModelDataWithModelDefinition
also shows this configuration being properly handled in the test environment.
The changes are consistent with the codebase structure and should properly enable dictionary editor configurations during deployment.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check how the model definition extraction uses the additional configs
# Expected: Find the extraction implementation to confirm proper handling
# Search for the extraction function implementation
ast-grep --pattern 'def extractModelDefinition($$$) = {
$$$
}'
# Search for tests verifying this behavior
rg -l "extractModelDefinition.*configsFromProvider" --type test
Length of output: 161
Script:
#!/bin/bash
# Let's try a broader search for the extraction implementation and tests
# Search for extractModelDefinition implementation
rg -A 10 "def extractModelDefinition"
# Search for tests with common test file extensions
rg -l "extractModelDefinition.*configsFromProvider" --type scala
Length of output: 6319
designer/server/src/test/scala/pl/touk/nussknacker/test/utils/domain/TestFactory.scala (1)
9-9: LGTM: Import addition is appropriate
The addition of `ComponentAdditionalConfig` to the existing component imports aligns with the new functionality being added.
designer/server/src/test/scala/pl/touk/nussknacker/ui/notifications/NotificationServiceTest.scala (2)
17-22: LGTM! Import changes are well-organized.
The new imports for deployment-related classes are properly grouped and align with the changes needed for additional configuration support.
203-203: Verify test coverage for additional component configurations.
While adding `TestFactory.additionalComponentConfigsByProcessingType` is correct, we should ensure there are specific test cases that verify the behavior with dictionary editors, as mentioned in the PR objectives.
engine/flink/management/periodic/src/main/scala/pl/touk/nussknacker/engine/management/periodic/PeriodicDeploymentManager.scala (1)
61-62: LGTM! Addition of configsFromProvider aligns with PR objectives.
The changes correctly pass the additional configurations to `PeriodicProcessService`, which is necessary for handling dictionary editor configurations during deployment.
Let's verify how `configsFromProvider` is used in `PeriodicProcessService`:
scenario-compiler/src/main/scala/pl/touk/nussknacker/engine/ModelData.scala (2)
Line range hint 74-98: LGTM! Clean implementation of the new configuration handling
The changes effectively integrate the new ModelConfigs parameter while maintaining the existing functionality and backward compatibility. The implementation correctly uses both the model input config and additional configurations.
61-66: Verify all callers are updated to use ModelConfigs
The signature changes in both the `duringFlinkExecution` and `duringExecution` methods require updates in all calling code.
Also applies to: 74-78
✅ Verification successful
Let me verify the actual usage by checking the content of these files to ensure they're using the correct parameter.
All callers have been updated correctly to use `ModelConfigs`
The verification shows that all callers are properly using the new signature:
- `FlinkTestMainSpec.scala` uses `ModelConfigs(config, AdditionalModelConfigs(...))`
- `FlinkProcessMain.scala` uses `ModelConfigs(config, deploymentData.additionalModelConfigs)`
- `RunnableScenarioInterpreterFactory.scala` uses `ModelConfigs(modelConfig)`
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find all callers of these methods to ensure they've been updated
# Search for direct method calls
echo "Searching for duringFlinkExecution callers..."
rg "duringFlinkExecution\(" --type scala
echo "Searching for duringExecution callers..."
rg "duringExecution\(" --type scala
Length of output: 1153
Script:
#!/bin/bash
# Check the content of files that call these methods
echo "=== FlinkTestMainSpec.scala ==="
rg -A 5 "duringFlinkExecution\(" engine/flink/executor/src/test/scala/pl/touk/nussknacker/engine/process/runner/FlinkTestMainSpec.scala
echo -e "\n=== FlinkProcessMain.scala ==="
rg -A 5 "duringFlinkExecution\(" engine/flink/executor/src/main/scala/pl/touk/nussknacker/engine/process/runner/FlinkProcessMain.scala
echo -e "\n=== RunnableScenarioInterpreterFactory.scala ==="
rg -A 5 "duringExecution\(" engine/lite/runtime-app/src/main/scala/pl/touk/nussknacker/engine/lite/app/RunnableScenarioInterpreterFactory.scala
Length of output: 1511
engine/lite/embeddedDeploymentManager/src/main/scala/pl/touk/nussknacker/engine/embedded/EmbeddedDeploymentManager.scala (2)
12-12: LGTM: Required imports for dictionary editor validation
The new imports support the validation functionality for dictionary editors, aligning with the PR objectives.
Also applies to: 23-23
103-103: LGTM: Early validation of dictionary editors
The validation is correctly placed before deployment to catch unsupported configurations early, preventing potential runtime issues.
designer/server/src/main/scala/pl/touk/nussknacker/ui/process/newdeployment/DeploymentService.scala (3)
7-11: LGTM: New imports and parameter additions align with requirements
The addition of component configuration types and the new `additionalComponentConfigs` parameter properly support the handling of dictionary editors during deployment.
Also applies to: 56-59
169-173: LGTM: Code duplication eliminated effectively
The extraction of common deployment data creation logic into a dedicated method improves maintainability and addresses previous review feedback about code duplication.
Also applies to: 202-206
238-242: Verify dictionary configuration filtering
Ensure that `getRequiredAdditionalConfigsForRuntime` properly filters configurations to include only dictionary-related ones, as mentioned in the PR objectives.
✅ Verification successful
Let me gather more information about the configuration filtering logic.
Let me check the test implementation to fully understand the filtering behavior.
Dictionary configuration filtering is properly implemented
The implementation in `AdditionalComponentConfigsForRuntimeExtractor` correctly filters configurations to include only dictionary-related ones. Specifically:
- It filters parameters to keep only those with `ValueInputWithDictEditor`
- It removes components that don't have any dictionary parameters
- Test cases verify this behavior with different parameter configurations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for dictionary-related filtering logic in AdditionalComponentConfigsForRuntimeExtractor
ast-grep --pattern 'def getRequiredAdditionalConfigsForRuntime($config) {
$$$
}'
# Search for dictionary configuration types
rg -A 5 'dictEditor|DictionaryConfig'
Length of output: 1372
Script:
#!/bin/bash
# Search for AdditionalComponentConfigsForRuntimeExtractor implementation
rg -l "AdditionalComponentConfigsForRuntimeExtractor"
# Once found, let's see its content
fd "AdditionalComponentConfigsForRuntimeExtractor.scala" --exec cat {}
# Search for any related test cases
rg -A 5 "AdditionalComponentConfigsForRuntimeExtractor.*Test"
Length of output: 5693
Script:
#!/bin/bash
# Get the full test implementation
cat designer/deployment-manager-api/src/test/scala/pl/touk/nussknacker/engine/util/AdditionalComponentConfigsForRuntimeExtractorTest.scala
Length of output: 3511
engine/flink/management/periodic/src/test/scala/pl/touk/nussknacker/engine/management/periodic/PeriodicProcessServiceTest.scala (1)
Line range hint 1-450: LGTM! Comprehensive test coverage maintained.
The existing test suite provides thorough coverage of various scenarios including deployment, scheduling, error handling, and status transitions. The changes maintain backward compatibility while supporting the new configuration parameter.
engine/flink/management/periodic/src/test/scala/pl/touk/nussknacker/engine/management/periodic/PeriodicProcessServiceIntegrationTest.scala (1)
136-137: LGTM! The changes maintain backward compatibility.
The addition of `ProcessingTypeActionServiceStub` and an empty configuration map as default parameters is correct and maintains backward compatibility for existing test cases.
engine/flink/management/src/test/scala/pl/touk/nussknacker/engine/management/FlinkRestManagerSpec.scala (1)
22-28: LGTM! Well-organized imports
The grouping of deployment-related imports improves code organization and readability.
designer/server/src/test/scala/pl/touk/nussknacker/test/base/it/NuResourcesTest.scala (2)
121-122: LGTM! Changes align with PR objectives.
The addition of `mapProcessingTypeDataProvider()` to the `DeploymentService` initialization enables passing additional configuration data, which is necessary for fixing deployments with dict editors.
Line range hint 449-454: Investigate and fix the underlying cause of transaction serialization failures.
The FIXME comment indicates that using `runInSerializableTransactionWithRetry` is a workaround for flaky tests. This suggests an underlying concurrency or transaction isolation issue that should be properly investigated and fixed.
Consider:
- Investigating the root cause of serialization failures
- Implementing proper transaction isolation levels
- Adding proper retry mechanisms with backoff if needed
- Adding monitoring for transaction failures in tests
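One shape the suggested "proper retry mechanism with backoff" could take — a hypothetical helper, not code from this PR; the exception type, method name, and signature are invented for illustration:

```scala
import scala.annotation.tailrec
import scala.util.{Failure, Success, Try}

object SerializableTxRetry {

  // Hypothetical marker for a database serialization conflict.
  final class SerializationFailure(msg: String) extends RuntimeException(msg)

  // Retry the transactional block a bounded number of times,
  // doubling the backoff after each serialization failure.
  @tailrec
  def runWithRetry[A](maxAttempts: Int, backoffMillis: Long)(block: () => A): A =
    Try(block()) match {
      case Success(a) => a
      case Failure(_: SerializationFailure) if maxAttempts > 1 =>
        Thread.sleep(backoffMillis)
        runWithRetry(maxAttempts - 1, backoffMillis * 2)(block)
      case Failure(e) => throw e // non-retryable error or attempts exhausted
    }
}
```

Bounding the attempts and rethrowing non-serialization errors keeps the retry from masking genuine bugs, which is the concern the FIXME raises.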
engine/flink/management/periodic/src/main/scala/pl/touk/nussknacker/engine/management/periodic/PeriodicProcessService.scala (1)
409-412: LGTM! Implementation correctly handles additional configurations.
The changes appropriately address the deployment issues with dictionary editors by:
- Passing empty `NodesDeploymentData`
- Including filtered additional configurations through `AdditionalModelConfigs`
Let's verify the handling of dictionary editor configurations:
✅ Verification successful
Let me analyze the results and generate additional verification scripts to get a complete picture.
The first results show that `AdditionalComponentConfigsForRuntimeExtractor` is used consistently across different deployment managers (Periodic, Embedded, and Designer). We need to verify its implementation and test cases to confirm it correctly handles dictionary editor configurations.
Implementation correctly filters and handles dictionary editor configurations
The verification confirms that the implementation is correct and safe. The `AdditionalComponentConfigsForRuntimeExtractor`:
- Filters configurations to include only components with dictionary editors
- Preserves only parameters that use `ValueInputWithDictEditor`
- Reduces data sent to Flink by removing unnecessary configuration
The test cases demonstrate proper handling of:
- Mixed parameters (with and without dict editors)
- Components without dict editors
- Preservation of required dict editor configurations
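The filtering behavior verified above can be sketched in a few lines of Scala. The case classes here are simplified stand-ins for Nussknacker's real config types (`ComponentAdditionalConfig`, `ParameterAdditionalUIConfig`), so treat this as an illustration of the idea rather than the actual implementation:

```scala
// Simplified stand-ins for the real Nussknacker config types.
sealed trait ValueInput
case object ValueInputWithFixedValues extends ValueInput
case object ValueInputWithDictEditor extends ValueInput

final case class ParameterUiConfig(valueEditor: Option[ValueInput])
final case class ComponentUiConfig(params: Map[String, ParameterUiConfig])

object DictConfigFilter {

  // Keep only parameters whose editor is a dictionary editor,
  // then drop components that end up with no parameters at all.
  def requiredForRuntime(
      configs: Map[String, ComponentUiConfig]
  ): Map[String, ComponentUiConfig] =
    configs
      .map { case (id, component) =>
        id -> component.copy(
          params = component.params.filter { case (_, p) =>
            p.valueEditor.contains(ValueInputWithDictEditor)
          }
        )
      }
      .filter { case (_, component) => component.params.nonEmpty }
}
```

Both filtering steps matter for the PR's goal of shrinking the payload sent to Flink: the first trims parameters, the second removes components left empty by the trim.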
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the usage of AdditionalComponentConfigsForRuntimeExtractor
# and its impact on dictionary editor configurations
# Test 1: Check for other usages of AdditionalComponentConfigsForRuntimeExtractor
rg "AdditionalComponentConfigsForRuntimeExtractor" -A 3
# Test 2: Check for dictionary editor related configurations
ast-grep --pattern 'dictEditor {
$$$
}'
Length of output: 9186
Script:
#!/bin/bash
# Let's check the implementation of AdditionalComponentConfigsForRuntimeExtractor
# to understand how dictionary editors are handled
# Check the implementation file
cat designer/deployment-manager-api/src/main/scala/pl/touk/nussknacker/engine/util/AdditionalComponentConfigsForRuntimeExtractor.scala
# Check test cases to understand the behavior
cat designer/deployment-manager-api/src/test/scala/pl/touk/nussknacker/engine/util/AdditionalComponentConfigsForRuntimeExtractorTest.scala
Length of output: 5176
designer/server/src/main/scala/pl/touk/nussknacker/ui/server/AkkaHttpBasedRouteProvider.scala (3)
235-237: LGTM: Additional component configs mapping
The implementation correctly maps processing types to their additional configurations, which is essential for fixing the dictionary editor deployment issues.
248-249: LGTM: Consistent handling of additional configs in deployment services
The additional component configurations are consistently passed to both `LegacyDeploymentService` and `DeploymentService`, ensuring uniform behavior during deployments.
Also applies to: 446-447
716-716: LGTM: Proper integration of additional configs in dependencies
The changes correctly integrate the additional UI config provider and its configurations into the deployment manager dependencies, which aligns with the PR's objective of fixing dictionary editor deployments.
Also applies to: 732-732, 752-753
designer/server/src/test/scala/pl/touk/nussknacker/ui/process/deployment/DeploymentServiceSpec.scala (1)
117-118: Verify test coverage for additionalComponentConfigs parameter
The addition of the `additionalComponentConfigsByProcessingType` parameter aligns with the PR's objective of handling dictionary editor configurations. However, I notice that there are no explicit test cases verifying the behavior of this new parameter.
Let's verify if there are any test cases for this parameter in other test files:
✅ Verification successful
Let's examine the test files found to understand the test coverage better.
Let's check the TestFactory to understand how this parameter is configured in tests.
Let's check one final thing to understand how this configuration is used in tests.
Based on the gathered information, I can now provide the final response:
Test coverage for additionalComponentConfigs is sufficient
The parameter `additionalComponentConfigsByProcessingType` is properly tested through:
- Integration in multiple test files including `DeploymentServiceTest` and `NotificationServiceTest`
- A well-defined test factory setup in `TestFactory.scala` using `mapProcessingTypeDataProvider`
- Default test configuration provided through `TestFactory.additionalComponentConfigsByProcessingType`
The implementation shows that it's using a test utility that provides empty combined data with unrestricted user access, which is appropriate for testing the deployment flow without specific component configurations, as the focus is on the deployment mechanism rather than specific component configurations.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for test cases related to additionalComponentConfigs
# Test: Look for test cases involving the new parameter
rg -l "additionalComponentConfigs.*Test|Test.*additionalComponentConfigs" --type scala
Length of output: 290
Script:
#!/bin/bash
# Check the test cases in the found files
echo "=== NotificationServiceTest.scala ==="
rg "additionalComponentConfigs" designer/server/src/test/scala/pl/touk/nussknacker/ui/notifications/NotificationServiceTest.scala -B 2 -A 5
echo -e "\n=== DeploymentServiceTest.scala ==="
rg "additionalComponentConfigs" designer/server/src/test/scala/pl/touk/nussknacker/ui/process/newdeployment/DeploymentServiceTest.scala -B 2 -A 5
Length of output: 951
Script:
#!/bin/bash
# Check TestFactory implementation
echo "=== Looking for TestFactory definition ==="
fd TestFactory.scala --type f --exec rg -l "additionalComponentConfigsByProcessingType" {} \;
echo -e "\n=== TestFactory implementation ==="
fd TestFactory.scala --type f --exec rg "additionalComponentConfigsByProcessingType" -B 2 -A 2 {} \;
Length of output: 656
Script:
#!/bin/bash
# Look for test cases using mapProcessingTypeDataProvider
echo "=== Looking for mapProcessingTypeDataProvider usage ==="
rg "mapProcessingTypeDataProvider" ./designer/server/src/test/scala/pl/touk/nussknacker/test/utils/domain/TestFactory.scala -B 2 -A 5
# Also check for any test assertions related to component configs
echo -e "\n=== Looking for test assertions on component configs ==="
rg "assert.*[Cc]omponent.*[Cc]onfig" ./designer/server/src/test/scala/pl/touk/nussknacker/ui/process/deployment/DeploymentServiceSpec.scala -B 2 -A 2
Length of output: 3497
docs/Changelog.md (1)
14-14: LGTM! Well-formatted changelog entries.
The new entries are properly formatted, include PR links, and provide clear descriptions of the changes.
engine/flink/executor/src/test/scala/pl/touk/nussknacker/engine/process/runner/FlinkTestMainSpec.scala (1)
23-31: Imports are appropriate for additional configurations
The newly added imports are necessary for handling component additional configurations and enhance the functionality.
designer/server/src/main/scala/pl/touk/nussknacker/ui/process/deployment/DeploymentService.scala (4)
11-15: Approved: New component imports are correctly added.
The imports for `ComponentAdditionalConfig`, `DesignerWideComponentId`, and `NodesDeploymentData` are necessary for handling additional component configurations in the deployment process.
23-23: Approved: Import of `AdditionalComponentConfigsForRuntimeExtractor` is appropriate.
This import brings in the `AdditionalComponentConfigsForRuntimeExtractor`, which is used to extract the required additional configurations for runtime in the deployment data preparation.
63-66: Approved: Constructor updated with `additionalComponentConfigs` parameter.
Adding `additionalComponentConfigs` to the `DeploymentService` constructor aligns with the need to manage additional component configurations per processing type.
336-341: Approved: Passing `AdditionalModelConfigs` to `DeploymentData`.
Including `AdditionalModelConfigs` constructed from the required additional configurations ensures that the necessary configuration data is available during deployment.
@@ -121,6 +123,24 @@ class EmbeddedDeploymentManager(
    }
  }

  // We make sure that we don't let deploy a scenario when any parameter editor was modified to dictionary one by AdditionalUIConfigProvider
  // as that would result in failure during compilation before execution
  private def ensureAdditionalConfigsDoNotContainDictionaryEditors(deploymentData: DeploymentData): Unit = {
ok, but I'm not sure if I understand why we are still focusing on the "dict" case here. Shouldn't we call `getRequiredAdditionalConfigsForRuntime` and when we get some non-empty map, just fail as we fail now?
// This function filters additional configs provided by AdditionalUIConfigProvider
// to include only component and parameter configs with Dictionary editors.
def getAdditionalConfigsWithDictParametersEditors(
IMO this should be private
-      nodesDeploymentData
+      nodesDeploymentData,
+      AdditionalModelConfigs(
+        AdditionalComponentConfigsForRuntimeExtractor.getRequiredAdditionalConfigsForRuntime(
let's move this to well-named private method
user = loggedUser.toManagerUser,
additionalDeploymentData = Map.empty,
nodesData = nodesData,
additionalModelConfigs = AdditionalModelConfigs(
let's move this object creation to some private, well-named method
import pl.touk.nussknacker.engine.api.component.{ComponentAdditionalConfig, DesignerWideComponentId}

@JsonCodec
case class AdditionalModelConfigs(
final case class
Describe your changes

Currently, we can modify components' editors using `AdditionalUiConfigProvider` by changing a parameter's editor to a dict editor. When we do this dynamically and then reload the model, everything looks fine in the designer. However, when we try to deploy such a scenario, it hangs with the "The scenario is being deployed" status message, and after a while we get:

This happens because an exception is thrown on the Flink job manager in `ExpressionCompiler`. Although the parameter's editor type was modified by `AdditionalUiConfigProvider`, this change is not visible on Flink, as we did not pass the additional config data.

At first, I wanted to call `AdditionalUiConfigProvider` from Flink in `FlinkProcessMain`, but this solution had several disadvantages:

- `AdditionalUiConfigProvider` would be called twice: once during model reload and a second time during scenario deployment. It would use different configs in these steps: the first call would use the application config, while the second would have to use the model config.
- The result of `AdditionalUiConfigProvider` could have dynamically changed since the last reload.

With this solution, `AdditionalUiConfigProvider` is called only during model reload. We pass the additional config through `DeploymentData` in `DeploymentService` and `PeriodicProcessService`. Additionally, I filter the additional configs down to only the dictionary ones, as only they are required, and I wanted to reduce the size of the arguments passed to Flink over HTTP. I also tested this solution locally with our instance of Nussknacker and dynamic modification by `AdditionalUiConfigProvider`.

Checklist before merge
Summary by CodeRabbit

New Features
- `AdditionalModelConfigs` for enhanced configuration management during deployments.
- JSON codecs for `ParameterName` and `DesignerWideComponentId` to facilitate JSON serialization.
- `ComponentAdditionalConfig` and `ParameterAdditionalUIConfig` for structured component configurations.
- `DeploymentData` extended to include `additionalModelConfigs`, improving deployment context handling.

Bug Fixes
- Validation in `EmbeddedDeploymentManager` to prevent unsupported dictionary editor modifications.

Documentation