
Refactor imports for project and cost_center #314

Merged
Hrishabh17 merged 8 commits into imports-refactor-master on Feb 29, 2024

Conversation

Hrishabh17 (Contributor) commented Feb 29, 2024

Summary by CodeRabbit

  • New Features
    • Enhanced synchronization capabilities with new mappings for different types and their corresponding endpoints.
    • Improved error logging for synchronization processes.
    • Added functionality to determine auto-sync permissions based on workspace settings.
    • Introduced scheduling and deletion of import tasks based on mapping configurations.
  • Refactor
    • Streamlined task scheduling by replacing old functions with a new, more efficient approach.
    • Consolidated logic for handling source fields in mappings.
    • Updated integration actions to support new mappings and task logic.
  • Chores
    • Updated subproject commit reference to include the latest changes and improvements.
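For readers unfamiliar with how these integrations typically manage import schedules, the create-or-delete behaviour mentioned in the summary can be pictured with a minimal django-q sketch. This is illustrative only, assuming django-q's Schedule model; the task path and argument format are placeholders, not the code added in this PR.

```python
from datetime import datetime

from django_q.models import Schedule


def schedule_or_delete_import_task(workspace_id: int, source_field: str, is_import_enabled: bool):
    """Keep exactly one periodic import schedule per workspace/field, or remove it."""
    # Hypothetical task path, used only for illustration.
    func = 'fyle_integrations_imports.tasks.trigger_import_via_schedule'
    args = '{},{}'.format(workspace_id, source_field)

    if is_import_enabled:
        # Create the schedule if missing, otherwise refresh its settings.
        Schedule.objects.update_or_create(
            func=func,
            args=args,
            defaults={
                'schedule_type': Schedule.MINUTES,
                'minutes': 24 * 60,        # run once a day
                'next_run': datetime.now()
            }
        )
    else:
        # The mapping no longer requires imports: drop any existing schedule.
        Schedule.objects.filter(func=func, args=args).delete()
```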

Hrishabh17 and others added 7 commits February 28, 2024 19:37
* Resource Cost_Center: refactored imports

* Refactor imports for Project resource (#311)

* Added integrations_imports submodule, made changes in settings

* Resource Project: Added new flow for the Project Import

* Typo Fix

* Resource Project: removed old import logic for Project

* Resource Project: added cluster import, fixed minor typo, added new supplier field

coderabbitai bot commented Feb 29, 2024

Walkthrough

The recent update introduces enhancements to the synchronization process, focusing on refining the handling of various mapping types and exceptions. It also improves the logic for scheduling and deleting tasks related to Fyle integrations, alongside the introduction of new functions to better manage auto synchronization permissions and task construction. This overhaul simplifies the process, making it more efficient and adaptable to specific settings and configurations.

Changes

  • .../constants.py: Added new entries to the SYNC_METHODS dictionary for endpoint mapping.
  • .../exceptions.py, .../helpers.py, .../queue.py, .../schedules.py, .../signals.py, .../tasks.py, .../triggers.py: Enhanced exception handling, auto-sync permission logic, task construction, and scheduling/deletion of tasks. Refactored and updated related functionalities.
  • .../actions.py: Updated logic for new mappings and tasks, reflecting changes in SYNC_METHODS and ALLOWED_SOURCE_FIELDS.
  • fyle_integrations_imports: Updated subproject commit reference.

"In the land of code and byte,
Where rabbits hop and keys take flight,
🌟 Changes bloom, bright and new,
Tasks align, in a queue so true.
Sync and map, with grace we thread,
Through fields of data, we boldly tread.
🐇 Hoppity hop, on paths we weave,
In the magic of code, we believe."

Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?


Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit-tests for this file.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit tests for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository from git and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit tests.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger a review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • The JSON schema for the configuration file is available here.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/coderabbit-overrides.v2.json

CodeRabbit Discord Community

Join our Discord Community to get help, request features, and share feedback.

Copy link

coderabbitai bot left a comment

Review Status

Actionable comments generated: 12

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between ef25328 and 9c450ca.
Files selected for processing (10)
  • apps/mappings/constants.py (1 hunks)
  • apps/mappings/exceptions.py (2 hunks)
  • apps/mappings/helpers.py (1 hunks)
  • apps/mappings/queue.py (2 hunks)
  • apps/mappings/schedules.py (1 hunks)
  • apps/mappings/signals.py (2 hunks)
  • apps/mappings/tasks.py (3 hunks)
  • apps/workspaces/apis/import_settings/triggers.py (4 hunks)
  • apps/xero/actions.py (3 hunks)
  • fyle_integrations_imports (1 hunks)
Additional comments: 17
fyle_integrations_imports (1)
  • 1-1: The subproject commit reference has been updated. Ensure that the new commit (a14efdaa0d14604f4b898325de3fb600bfa7b1a7) is stable and compatible with the current project setup.
apps/mappings/constants.py (1)
  • 47-53: New entries have been added to the SYNC_METHODS dictionary. Ensure that these new mappings are utilized correctly throughout the project and are compatible with any external services or APIs they interact with.
apps/mappings/schedules.py (1)
  • 8-48: The new_schedule_or_delete_fyle_import_tasks function has been added to schedule or delete Fyle import tasks based on specific conditions. Ensure that the conditions for scheduling and deleting tasks are comprehensive and cover all necessary scenarios. Also, verify that the deletion of tasks does not inadvertently remove necessary schedules.
apps/mappings/signals.py (1)
  • 43-53: The functionality related to scheduling tasks has been refactored to use new_schedule_or_delete_fyle_import_tasks, and the logic for handling different source fields has been consolidated using ALLOWED_SOURCE_FIELDS. Ensure thorough testing is conducted to verify that this refactoring does not introduce regressions or unintended behavior changes in task scheduling and handling of source fields.
apps/xero/actions.py (1)
  • 55-74: Updates to refersh_xero_dimension include handling new mappings for PROJECT and COST_CENTER and updates to task calls within the loop to reflect the new logic. Ensure the correct application of the is_auto_sync_allowed function and verify that all scenarios are correctly handled by the loop logic, including edge cases that could lead to incorrect behavior.
apps/workspaces/apis/import_settings/triggers.py (1)
  • 50-53: The functionality in triggers.py has been updated to use new_schedule_or_delete_fyle_import_tasks for workspace settings and mapping settings, replacing schedule_cost_centers_creation. Ensure that the new function is called with the correct parameters and that its behavior aligns with expectations. Also, verify that no specific scenarios handled by the previous function are overlooked.
apps/mappings/queue.py (1)
  • 105-152: The construct_tasks_and_chain_import_fields_to_fyle function has been added to construct tasks based on mapping settings and chain import fields to Fyle. Ensure that the task settings are comprehensive and correctly reflect the project's requirements. Also, verify that the chaining of tasks functions as expected in various scenarios, including edge cases that could lead to incorrect behavior.
apps/mappings/tasks.py (10)
  • 287-292: > 📝 NOTE

This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [1-13]

The import statements and global variables are well-organized and clear. However, it's important to ensure that all imported modules and variables are used within the file. Unused imports should be removed to keep the code clean and maintainable.

  • 287-292: > 📝 NOTE

This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [60-76]

The function resolve_expense_attribute_errors is concise and serves its purpose well. It's good to see the use of type annotations for clarity. Ensure that the database queries are optimized for performance, especially when dealing with large datasets.

  • 287-292: > 📝 NOTE

This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [125-142]

The function get_all_categories_from_fyle fetches categories from Fyle and constructs a name map. It's a good practice to handle API pagination as done here. Ensure that error handling is robust around the API call to handle potential failures gracefully.

  • 287-292: > 📝 NOTE

This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [200-229]

The decorator @handle_import_exceptions is used to handle exceptions for the auto_create_category_mappings function. Ensure that this decorator is thoroughly tested, especially for different types of exceptions that might occur during the mapping process. It's crucial that the application can gracefully handle and log errors.

Verification successful

The verification process has identified unit tests covering the functionalities decorated by the @handle_import_exceptions decorator, indicating that the decorator's exception handling capabilities are likely tested indirectly through these tests. This addresses the review comment concerning the thorough testing of the decorator for different types of exceptions that might occur during the mapping process.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Ensure the handle_import_exceptions decorator is covered by unit tests.
rg --type py "def test_handle_import_exceptions"

Length of output: 50



Script:

#!/bin/bash
# Find usage of the handle_import_exceptions decorator to identify which functions it decorates.
rg --type py "@handle_import_exceptions"

Length of output: 512


Script:

#!/bin/bash
# Placeholder for the next step, to be filled after identifying the functions decorated by handle_import_exceptions.
# This script will search for tests covering those functions.
# Example: rg --type py "def test_<function_name>"

Length of output: 0



Script:

#!/bin/bash
# Search for tests related to the functionalities mentioned in the decorator usage.
# Given the specific nature of the tasks, a broader search strategy is employed.
rg --type py "test_auto_create_category_mappings|test_async_auto_map_employees|test_auto_create_expense_fields_mappings|test_async_auto_create_custom_field_mappings|test_auto_create_tax_codes_mappings"

Length of output: 573


Script:

#!/bin/bash
# Additionally, search for tests that might indirectly cover the functionalities by mentioning key terms like "mappings" and "auto create" in the test names.
rg --type py "def test_.*(mappings|auto_create).*"

Length of output: 539

  • 287-292: > 📝 NOTE

This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [231-253]

The function async_auto_map_employees demonstrates good use of external service connectors and settings from the database. Ensure that there's error handling around external service calls and database queries to handle any potential failures or exceptions.

  • 287-292: > 📝 NOTE

This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [330-356]

The function upload_attributes_to_fyle uploads attributes to Fyle after removing duplicates and creating a custom field payload. It's clear and follows a logical flow. Ensure that the platform.expense_custom_fields.post call is wrapped in try-except blocks to handle potential API failures gracefully.

  • 287-292: > 📝 NOTE

This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [358-376]

The function auto_create_expense_fields_mappings is concise and leverages other functions to perform its task. It's good to see the use of bulk operations for creating mappings. Ensure that there's sufficient logging and error handling around the bulk creation process.

  • 287-292: > 📝 NOTE

This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [431-454]

The function auto_create_tax_codes_mappings is responsible for creating tax code mappings. It follows a clear process and makes good use of existing functions. Ensure that there's robust error handling around external service interactions and database operations.

  • 287-292: > 📝 NOTE

This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [456-468]

The function auto_create_suppliers_as_merchants demonstrates a straightforward approach to creating suppliers as merchants. It's good to see the use of external connectors. Ensure that the post method call to the Fyle connector is wrapped in try-except blocks to handle potential API failures.

  • 287-292: > 📝 NOTE

This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [470-496]

The function auto_import_and_map_fyle_fields orchestrates the auto-import and mapping process. It's structured and makes good use of the Chain class for task management. Ensure that there's sufficient error handling and logging within each task to handle and track failures effectively.
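As a rough illustration of the Chain-based orchestration mentioned in the last point (not the actual body of auto_import_and_map_fyle_fields), the sequential queuing could look like the sketch below, assuming django-q's Chain; the task path is a placeholder.

```python
from django_q.tasks import Chain


def chain_import_fields_to_fyle(workspace_id: int, source_fields: list):
    """Queue one import task per source field so they execute sequentially."""
    chain = Chain(cached=False)

    for source_field in source_fields:
        # Placeholder task path; the real project wires in its own import task here.
        chain.append(
            'fyle_integrations_imports.tasks.trigger_import_via_schedule',
            workspace_id,
            source_field
        )

    # Only start the chain if at least one task was queued.
    if chain.length() > 0:
        chain.run()
```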

Comment on lines 45 to 54
def is_auto_sync_allowed(workspace_general_settings: WorkspaceGeneralSettings, mapping_setting: MappingSetting = None):
    """
    Get the auto sync permission
    :return: bool
    """
    is_auto_sync_status_allowed = False
    if (mapping_setting and mapping_setting.destination_field == 'CUSTOMER' and mapping_setting.source_field == 'PROJECT') or workspace_general_settings.import_categories:
        is_auto_sync_status_allowed = True

    return is_auto_sync_status_allowed

The is_auto_sync_allowed function has been added to determine auto-sync permission based on specific conditions. Consider enhancing the function's documentation to provide more details about its parameters and return value. Also, ensure that this function is called in the appropriate contexts and that its return value is handled correctly.

-    :return: bool
+    :param workspace_general_settings: WorkspaceGeneralSettings instance
+    :param mapping_setting: Optional MappingSetting instance
+    :return: bool indicating whether auto-sync is allowed

Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.

Suggested change

def is_auto_sync_allowed(workspace_general_settings: WorkspaceGeneralSettings, mapping_setting: MappingSetting = None):
    """
    Get the auto sync permission
    :return: bool
    """
    is_auto_sync_status_allowed = False
    if (mapping_setting and mapping_setting.destination_field == 'CUSTOMER' and mapping_setting.source_field == 'PROJECT') or workspace_general_settings.import_categories:
        is_auto_sync_status_allowed = True
    return is_auto_sync_status_allowed

def is_auto_sync_allowed(workspace_general_settings: WorkspaceGeneralSettings, mapping_setting: MappingSetting = None):
    """
    Get the auto sync permission
    :param workspace_general_settings: WorkspaceGeneralSettings instance
    :param mapping_setting: Optional MappingSetting instance
    :return: bool indicating whether auto-sync is allowed
    """
    is_auto_sync_status_allowed = False
    if (mapping_setting and mapping_setting.destination_field == 'CUSTOMER' and mapping_setting.source_field == 'PROJECT') or workspace_general_settings.import_categories:
        is_auto_sync_status_allowed = True
    return is_auto_sync_status_allowed
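For context, a typical consumer of this helper would fold the flag into the per-field task settings. The snippet below is only a usage sketch under assumed names (build_project_task_settings is hypothetical, and MappingSetting is assumed to be importable from fyle_accounting_mappings.models as elsewhere in these apps); it is not code from this PR.

```python
from fyle_accounting_mappings.models import MappingSetting


def build_project_task_settings(workspace_general_settings, workspace_id: int) -> dict:
    """Hypothetical helper showing how the auto-sync flag might be consumed."""
    mapping_setting = MappingSetting.objects.filter(
        workspace_id=workspace_id, source_field='PROJECT'
    ).first()

    return {
        'import_categories': workspace_general_settings.import_categories,
        # Reuses is_auto_sync_allowed from the snippet above.
        'is_auto_sync_enabled': is_auto_sync_allowed(workspace_general_settings, mapping_setting)
    }
```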

Comment on lines 73 to 123
def handle_import_exceptions_v2(func):
    def new_fn(expense_attribute_instance, *args):
        import_log: ImportLog = args[0]
        workspace_id = import_log.workspace_id
        attribute_type = import_log.attribute_type
        error = {
            'task': 'Import {0} to Fyle and Auto Create Mappings'.format(attribute_type),
            'workspace_id': workspace_id,
            'message': None,
            'response': None
        }
        try:
            return func(expense_attribute_instance, *args)
        except WrongParamsError as exception:
            error['message'] = exception.message
            error['response'] = exception.response
            error['alert'] = True
            import_log.status = 'FAILED'

        except InvalidTokenError:
            error['message'] = 'Invalid Token for fyle'
            error['alert'] = False
            import_log.status = 'FAILED'

        except InternalServerError:
            error['message'] = 'Internal server error while importing to Fyle'
            error['alert'] = True
            import_log.status = 'FAILED'

        except (XeroWrongParamsError, XeroInvalidTokenError, XeroCredentials.DoesNotExist) as exception:
            error['message'] = 'Invalid Token or Xero credentials does not exist workspace_id - {0}'.format(workspace_id)
            error['alert'] = False
            error['response'] = exception.__dict__
            import_log.status = 'FAILED'

        except Exception:
            response = traceback.format_exc()
            error['message'] = 'Something went wrong'
            error['response'] = response
            error['alert'] = False
            import_log.status = 'FATAL'

        if error['alert']:
            logger.error(error)
        else:
            logger.info(error)

        import_log.error_log = error
        import_log.save()

    return new_fn

The handle_import_exceptions_v2 function has been added to handle various exceptions and log errors accordingly. Consider enhancing the function's documentation to provide more details about its parameters and expected behavior. Also, ensure that all relevant exceptions are caught and that the error logging provides sufficient detail for troubleshooting.

-    :param expense_attribute_instance: Instance of the expense attribute being processed
+    :param expense_attribute_instance: Instance of the expense attribute being processed
+    :param args: Additional arguments, including an ImportLog instance
+    :return: The result of the function call if no exceptions are caught

Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.

Suggested change

def handle_import_exceptions_v2(func):
    def new_fn(expense_attribute_instance, *args):
        import_log: ImportLog = args[0]
        workspace_id = import_log.workspace_id
        attribute_type = import_log.attribute_type
        error = {
            'task': 'Import {0} to Fyle and Auto Create Mappings'.format(attribute_type),
            'workspace_id': workspace_id,
            'message': None,
            'response': None
        }
        try:
            return func(expense_attribute_instance, *args)
        except WrongParamsError as exception:
            error['message'] = exception.message
            error['response'] = exception.response
            error['alert'] = True
            import_log.status = 'FAILED'
        except InvalidTokenError:
            error['message'] = 'Invalid Token for fyle'
            error['alert'] = False
            import_log.status = 'FAILED'
        except InternalServerError:
            error['message'] = 'Internal server error while importing to Fyle'
            error['alert'] = True
            import_log.status = 'FAILED'
        except (XeroWrongParamsError, XeroInvalidTokenError, XeroCredentials.DoesNotExist) as exception:
            error['message'] = 'Invalid Token or Xero credentials does not exist workspace_id - {0}'.format(workspace_id)
            error['alert'] = False
            error['response'] = exception.__dict__
            import_log.status = 'FAILED'
        except Exception:
            response = traceback.format_exc()
            error['message'] = 'Something went wrong'
            error['response'] = response
            error['alert'] = False
            import_log.status = 'FATAL'
        if error['alert']:
            logger.error(error)
        else:
            logger.info(error)
        import_log.error_log = error
        import_log.save()
    return new_fn

def handle_import_exceptions_v2(func):
    """
    :param expense_attribute_instance: Instance of the expense attribute being processed
    :param args: Additional arguments, including an ImportLog instance
    :return: The result of the function call if no exceptions are caught
    """
    def new_fn(expense_attribute_instance, *args):
        import_log: ImportLog = args[0]
        workspace_id = import_log.workspace_id
        attribute_type = import_log.attribute_type
        error = {
            'task': 'Import {0} to Fyle and Auto Create Mappings'.format(attribute_type),
            'workspace_id': workspace_id,
            'message': None,
            'response': None
        }
        try:
            return func(expense_attribute_instance, *args)
        except WrongParamsError as exception:
            error['message'] = exception.message
            error['response'] = exception.response
            error['alert'] = True
            import_log.status = 'FAILED'
        except InvalidTokenError:
            error['message'] = 'Invalid Token for fyle'
            error['alert'] = False
            import_log.status = 'FAILED'
        except InternalServerError:
            error['message'] = 'Internal server error while importing to Fyle'
            error['alert'] = True
            import_log.status = 'FAILED'
        except (XeroWrongParamsError, XeroInvalidTokenError, XeroCredentials.DoesNotExist) as exception:
            error['message'] = 'Invalid Token or Xero credentials does not exist workspace_id - {0}'.format(workspace_id)
            error['alert'] = False
            error['response'] = exception.__dict__
            import_log.status = 'FAILED'
        except Exception:
            response = traceback.format_exc()
            error['message'] = 'Something went wrong'
            error['response'] = response
            error['alert'] = False
            import_log.status = 'FATAL'
        if error['alert']:
            logger.error(error)
        else:
            logger.info(error)
        import_log.error_log = error
        import_log.save()
    return new_fn
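To make the decorator's calling convention concrete, here is a minimal, hypothetical usage sketch (the function name and body are assumptions, not code from this PR): the ImportLog must be passed as the first extra positional argument, since new_fn reads it from args[0].

```python
@handle_import_exceptions_v2
def import_attributes_to_fyle(expense_attribute_instance, import_log):
    """Hypothetical import entry point guarded by the decorator above."""
    # Any exception raised here is caught, logged, and recorded on import_log
    # instead of crashing the worker.
    # ... call the Fyle / Xero connectors and create mappings ...
    import_log.status = 'COMPLETE'
    import_log.save()


# The caller passes the ImportLog right after the importer instance:
# import_attributes_to_fyle(importer_instance, import_log)
```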

Comment on lines 287 to 292
    fyle_projects = ExpenseAttribute.objects.filter(
        workspace_id=workspace_id, attribute_type=FyleAttributeEnum.PROJECT, auto_mapped=True
    ).order_by("value", "id")[offset:limit]

    project_names = list(set(field.value for field in fyle_projects))

    xero_customers = DestinationAttribute.objects.filter(
        attribute_type=destination_field,
        workspace_id=workspace_id,
        value__in=project_names,
    ).values_list("value", flat=True)

    for fyle_project in fyle_projects:
        if fyle_project.value not in xero_customers:
            fyle_project.active = False
            fyle_project.save()
            expense_attribute_to_be_disabled.append(fyle_project.id)

    return expense_attribute_to_be_disabled


def post_projects_in_batches(
    platform: PlatformConnector, workspace_id: int, destination_field: str
):
    existing_project_names = ExpenseAttribute.objects.filter(
        attribute_type=FyleAttributeEnum.PROJECT, workspace_id=workspace_id
    ).values_list("value", flat=True)
    xero_attributes_count = DestinationAttribute.objects.filter(
        attribute_type=destination_field, workspace_id=workspace_id
    ).count()
    page_size = 200

    for offset in range(0, xero_attributes_count, page_size):
        limit = offset + page_size
        paginated_xero_attributes = DestinationAttribute.objects.filter(
            attribute_type=destination_field, workspace_id=workspace_id
        ).order_by("value", "id")[offset:limit]

        paginated_xero_attributes = remove_duplicates(paginated_xero_attributes)

        fyle_payload: List[Dict] = create_fyle_projects_payload(
            paginated_xero_attributes, existing_project_names
        )

        if fyle_payload:
            platform.projects.post_bulk(fyle_payload)
            platform.projects.sync()

        Mapping.bulk_create_mappings(
            paginated_xero_attributes, FyleAttributeEnum.PROJECT, destination_field, workspace_id
        )

    if destination_field == "CUSTOMER":
        expense_attribute_to_be_disable = disable_renamed_projects(
            workspace_id, destination_field
        )
        project_ids_to_be_changed = disable_expense_attributes(
            FyleAttributeEnum.PROJECT, "CUSTOMER", workspace_id
        )
        project_ids_to_be_changed.extend(expense_attribute_to_be_disable)
        if project_ids_to_be_changed:
            expense_attributes = ExpenseAttribute.objects.filter(
                id__in=project_ids_to_be_changed
            )
            fyle_payload: List[Dict] = create_fyle_projects_payload(
                projects=[],
                existing_project_names=[],
                updated_projects=expense_attributes,
            )
            platform.projects.post_bulk(fyle_payload)
            platform.projects.sync()


@handle_import_exceptions(task_name="auto_create_project_mappings")
def auto_create_project_mappings(workspace_id: int):
    """
    Create Project Mappings
    :return: mappings
    """

    fyle_credentials: FyleCredential = FyleCredential.objects.get(
        workspace_id=workspace_id
    )

    platform = PlatformConnector(fyle_credentials)

    platform.projects.sync()

    mapping_setting = MappingSetting.objects.get(
        source_field=FyleAttributeEnum.PROJECT, workspace_id=workspace_id
    )

    sync_xero_attributes(mapping_setting.destination_field, workspace_id)

    post_projects_in_batches(platform, workspace_id, mapping_setting.destination_field)


def create_fyle_expense_custom_fields_payload(
    xero_attributes: List[DestinationAttribute],
    workspace_id: int,

📝 NOTE
This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [15-17]

The logger configuration is directly set within the file. Consider using a centralized logging configuration if not already in place. This approach allows for easier maintenance and consistency across different modules of the application.

- logger.level = logging.INFO
+ # It's recommended to configure logging levels and handlers in a centralized logging configuration file.
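A centralized setup along the lines suggested above is usually a standard Django LOGGING dict in settings; the sketch below is generic and does not reproduce this project's actual settings module.

```python
# settings.py (generic example): configure levels and handlers in one place so
# individual modules only call logging.getLogger(__name__).
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        'apps.mappings': {
            'handlers': ['console'],
            'level': 'INFO',
        },
    },
}
```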

📝 NOTE
This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [19-37]

The DEFAULT_FYLE_CATEGORIES list is hardcoded. If these categories are static and unlikely to change, this approach is acceptable. However, if they might change or need to be customized per workspace or user, consider fetching them from a configuration file or database to enhance flexibility and maintainability.


📝 NOTE
This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [39-58]

The function disable_expense_attributes is well-structured and follows good practices. However, ensure that the filter variable name does not overshadow the built-in Python function filter. Using more descriptive variable names can improve readability and avoid potential confusion.

- filter = {"mapping__isnull": False, "mapping__destination_type": destination_field}
+ filter_criteria = {"mapping__isnull": False, "mapping__destination_type": destination_field}

📝 NOTE
This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [78-91]

The function remove_duplicates is a utility function that removes duplicate attributes based on their value. While the function is clear, consider if there's a more efficient way to achieve this, especially if xero_attributes can be large. Using a set or dictionary to track seen values might improve performance.

- attribute_values = []
+ attribute_values = set()
- if attribute.value.lower() not in attribute_values:
+ if attribute.value.lower() not in attribute_values:
-     attribute_values.append(attribute.value.lower())
+     attribute_values.add(attribute.value.lower())
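Applied to the whole helper, the suggestion amounts to something like the sketch below (same behaviour, with O(1) membership checks against a set); this is an illustration, not the exact function body in tasks.py.

```python
from typing import List

from fyle_accounting_mappings.models import DestinationAttribute


def remove_duplicates(xero_attributes: List[DestinationAttribute]) -> List[DestinationAttribute]:
    """Drop attributes whose value has already been seen (case-insensitively)."""
    unique_attributes = []
    seen_values = set()

    for attribute in xero_attributes:
        value = attribute.value.lower()
        if value not in seen_values:
            unique_attributes.append(attribute)
            seen_values.add(value)

    return unique_attributes
```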

📝 NOTE
This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [93-123]

The function create_fyle_categories_payload constructs a payload for category import. It's well-commented and structured. However, consider extracting the payload construction into a separate function or using a more declarative approach to improve readability and maintainability.


📝 NOTE
This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [144-198]

The function upload_categories_to_fyle is responsible for uploading categories to Fyle. It's comprehensive and includes logging, which is good for monitoring. However, this function seems to be doing multiple things - fetching categories from Xero, creating a payload, and uploading to Fyle. Consider breaking it down into smaller functions to adhere to the Single Responsibility Principle (SRP) for better maintainability and testing.


📝 NOTE
This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [255-269]

The function sync_xero_attributes is a straightforward utility function for syncing attributes from Xero. It's good to see clear separation based on the attribute type. Consider adding logging for each case to provide more visibility into the sync process.

+ logger.info(f"Syncing {xero_attribute_type} from Xero.")

📝 NOTE
This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [271-328]

The function create_fyle_expense_custom_fields_payload is well-documented and handles various scenarios for constructing the payload. However, the nested if-else structure could be simplified for better readability. Consider using a mapping or a separate function to determine the placeholder value.
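One way to flatten the nested if-else, as suggested, is a small helper that resolves the placeholder in priority order; the sketch below is illustrative and does not reproduce the exact rules in create_fyle_expense_custom_fields_payload.

```python
def resolve_custom_field_placeholder(source_placeholder: str, fyle_attribute: str, existing_attribute: dict) -> str:
    """Pick a placeholder: caller-supplied first, then the one already stored in Fyle,
    and finally a generated default."""
    if source_placeholder:
        return source_placeholder
    if existing_attribute and existing_attribute.get('placeholder'):
        return existing_attribute['placeholder']
    return 'Select {0}'.format(fyle_attribute)
```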


📝 NOTE
This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [378-395]

The function async_auto_create_custom_field_mappings iterates over mapping settings to create custom field mappings. It's well-structured and makes good use of existing functions. Consider adding more detailed logging to provide insights into the mapping creation process.

+ logger.info(f"Creating custom field mappings for {mapping_setting.source_field}.")

📝 NOTE
This review was outside the diff hunks, and no overlapping diff hunk was found. Original lines [397-429]

The function upload_tax_groups_to_fyle and create_fyle_tax_group_payload work together to upload tax groups to Fyle. These functions are well-documented and structured. However, consider separating the payload creation logic into a different function or module to keep the code DRY and improve maintainability.

Hrishabh17 changed the title from "Refactor imports" to "Refactor imports for project and cost_center" on Feb 29, 2024
Hrishabh17 merged commit 7c0469b into imports-refactor-master on Feb 29, 2024
1 check passed
Hrishabh17 added a commit that referenced this pull request Mar 12, 2024
* Added integrations_imports submodule, made changes in settings (#310)

* Refactor imports for Project resource (#311)

* Added integrations_imports submodule, made changes in settings

* Resource Project: Added new flow for the Project Import

* Typo Fix

* Resource Project: removed old import logic for Project

* Resource Project: added cluster import, fixed minor typo, added new supplier field

* Refactor imports for project and cost_center (#314)

* Added integrations_imports submodule, made changes in settings

* Resource Project: Added new flow for the Project Import

* Typo Fix

* Resource Project: removed old import logic for Project

* Resource Project: added cluster import, fixed minor typo, added new supplier field

* Refactor imports costcenter (#312)

* Resource Cost_Center: refactored imports

* Refactor imports for Project resource (#311)

* Added integrations_imports submodule, made changes in settings

* Resource Project: Added new flow for the Project Import

* Typo Fix

* Resource Project: removed old import logic for Project

* Resource Project: added cluster import, fixed minor typo, added new supplier field

* Refactor imports custom expense (#313)

* Added integrations_imports submodule, made changes in settings

* Resource Project: Added new flow for the Project Import

* Typo Fix

* Resource Project: removed old import logic for Project

* Resource Cost_Center: refactored imports

* Resource Custom Expense Field: refactored imports

* Resource Project: added cluster import, fixed minor typo, added new supplier field

* Resource Tax Group: refactored imports (#315)

* Added integrations_imports submodule, made changes in settings

* Resource Project: Added new flow for the Project Import

* Typo Fix

* Resource Project: removed old import logic for Project

* Resource Cost_Center: refactored imports

* Resource Custom Expense Field: refactored imports

* Resource Project: added cluster import, fixed minor typo, added new supplier field

* Resource Tax Group: refactored imports

* Resource Tax Group: changed to is_auto_sync to False by default

* Resource Category: refactored imports (#316)

* Added integrations_imports submodule, made changes in settings

* Resource Project: Added new flow for the Project Import

* Typo Fix

* Resource Project: removed old import logic for Project

* Resource Cost_Center: refactored imports

* Resource Custom Expense Field: refactored imports

* Resource Tax Group: refactored imports

* Resource Category: refactored imports

* Resource Category: changed is_auto_sync_enabled to True by default

* Refactor imports supplier (#318)

* Added integrations_imports submodule, made changes in settings

* Resource Project: Added new flow for the Project Import

* Typo Fix

* Resource Project: removed old import logic for Project

* Resource Cost_Center: refactored imports

* Resource Custom Expense Field: refactored imports

* Resource Tax Group: refactored imports

* Resource Category: refactored imports

* Added RetryException handler and bumped the sdk versions (#306)

* Set tasks limit to 1L (#317)

* Set tasks limit to 1L

* set timeout

* Modified the import schedule creation condition

---------

Co-authored-by: ruuushhh <[email protected]>

* Refactor old flow (#321)

* Added integrations_imports submodule, made changes in settings

* Resource Project: Added new flow for the Project Import

* Typo Fix

* Resource Project: removed old import logic for Project

* Resource Cost_Center: refactored imports

* Resource Custom Expense Field: refactored imports

* Resource Tax Group: refactored imports

* Resource Category: refactored imports

* Added RetryException handler and bumped the sdk versions (#306)

* Set tasks limit to 1L (#317)

* Set tasks limit to 1L

* set timeout

* Modified the import schedule creation condition

* Removed dead code, add few test cases, added script for adding new schedule for different workspaces

* Added post deploy sql script to dump old schedules and delete

---------

Co-authored-by: ruuushhh <[email protected]>

* Merged changes from master

* Update delete workspace func (#319)

* Update delete workspace func

* Fix comments

* upate workspace last synced fix (#320)

* Modified the workflow for submodule

* Decreased the coverage to 88%

---------

Co-authored-by: ruuushhh <[email protected]>