
[OSDEV-1158] API. Create POST endpoint for v1/production-locations to support SLC & API Mod.#407

Merged
vladsha-dev merged 34 commits into main from
OSDEV-1158-api-create-post-endpoint-for-v1-production-locations-to-support-slc-and-api-mod
on Dec 6, 2024

Conversation

@vladsha-dev
Contributor

@vladsha-dev vladsha-dev commented Nov 6, 2024

[OSDEV-1158]

  1. Introduced a new POST /api/v1/production-locations/ endpoint based on the API v1 specification. The endpoint creates a moderation event for a production location creation request with the given details.
  2. Removed the redundant redefinition of paths via the as_view method for all v1 API endpoints, since they are already registered via DefaultRouter.
  3. Made the following updates to the ModerationEvent table:
    1. Set uuid as the primary key.
    2. Made the geocode_result field optional. It can be blank when lat and lng
      have been provided by the user.
    3. Removed the redundant blank=False and null=False constraints, as these are
      the default values for Django model fields and do not need to be set
      explicitly.
    4. Made the contributor field non-nullable, since a moderation event must
      always record which contributor submitted it.
    5. Allowed the claim field to be blank. This reflects the fact that
      a moderation event is not always related to a claim, so the field can
      be left empty.
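The validation rules implied by the description (name, address, and country required; coordinates optional, with geocoding skipped when they are supplied) can be sketched client-side. Field names here are illustrative assumptions, not the confirmed v1 request schema:

```python
# Hypothetical request body for POST /api/v1/production-locations/ and a
# client-side pre-check mirroring the rules described above. All field
# names are assumptions for illustration.

REQUIRED_FIELDS = {"name", "address", "country"}

def precheck(payload: dict) -> list:
    """Return a list of error dicts, one per missing required field."""
    errors = []
    for field in sorted(REQUIRED_FIELDS - payload.keys()):
        errors.append({"field": field, "message": f"{field} is required."})
    return errors

payload = {
    "source": "API",
    "name": "Example Garment Factory",
    "address": "123 Textile Road, Dhaka",
    "country": "Bangladesh",
    # Optional: when lat/lng are provided, geocode_result may stay blank.
    "coordinates": {"lat": 23.8103, "lng": 90.4125},
}

assert precheck(payload) == []
```

A successful request would create a moderation event (keyed by its uuid primary key) rather than writing the production location directly.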

@vladsha-dev vladsha-dev self-assigned this Nov 6, 2024
@vladsha-dev vladsha-dev marked this pull request as draft November 6, 2024 12:43
@vladsha-dev vladsha-dev temporarily deployed to Quality Environment November 6, 2024 12:43 — with GitHub Actions Inactive
@vladsha-dev vladsha-dev temporarily deployed to Quality Environment November 6, 2024 12:45 — with GitHub Actions Inactive
@barecheck

barecheck bot commented Nov 6, 2024

React App | Jest test suite - Code coverage report

Total: 27.74%

Your code coverage diff: 0.00% ▴

✅ All code changes are covered

@vladsha-dev vladsha-dev temporarily deployed to Quality Environment November 6, 2024 12:50 — with GitHub Actions Inactive
@barecheck

barecheck bot commented Nov 6, 2024

Dedupe Hub App | Unittest test suite - Code coverage report

Total: 56.14%

Your code coverage diff: 0.00% ▴

✅ All code changes are covered

@vladsha-dev vladsha-dev temporarily deployed to Quality Environment November 6, 2024 12:51 — with GitHub Actions Inactive
@barecheck

barecheck bot commented Nov 6, 2024

Contricleaner App | Unittest test suite - Code coverage report

Total: 98.91%

Your code coverage diff: 0.00% ▴

✅ All code changes are covered

@barecheck

barecheck bot commented Nov 6, 2024

Countries App | Unittest test suite - Code coverage report

Total: 100%

Your code coverage diff: 0.00% ▴

✅ All code changes are covered

@vladsha-dev vladsha-dev temporarily deployed to Quality Environment November 6, 2024 13:00 — with GitHub Actions Inactive
@barecheck

barecheck bot commented Nov 6, 2024

Django App | Unittest test suite - Code coverage report

Total: 79.98%

Your code coverage diff: 0.42% ▴

✅ All code changes are covered

Base automatically changed from move-all-the-files-related-to-the-opensearch-service-to-the-existing-opensearch-folder to main November 6, 2024 13:56
@vladsha-dev vladsha-dev force-pushed the OSDEV-1158-api-create-post-endpoint-for-v1-production-locations-to-support-slc-and-api-mod branch from e5f546d to 9c46a51 on November 6, 2024 14:51
@vladsha-dev vladsha-dev temporarily deployed to Quality Environment November 6, 2024 14:51 — with GitHub Actions Inactive
@coderabbitai

coderabbitai bot commented Dec 4, 2024

Warning

Rate limit exceeded

@vladsha-dev has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 3 minutes and 33 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.


📥 Commits

Reviewing files that changed from the base of the PR and between 1cc6465 and 9ee130d.

📝 Walkthrough

This pull request includes significant updates to the Open Supply Hub project, particularly focusing on the release version 1.26.0. Key changes involve database migrations and schema adjustments for the ModerationEvent table, the introduction of new API endpoints, and modifications to existing endpoints. Error handling has been standardized across the API, with changes to error response formats. Additionally, new classes and methods have been added to enhance event creation strategies and improve data processing. Overall, the changes reflect a comprehensive enhancement of the project's functionality and clarity.

Changes

File Change Summary
doc/release/RELEASE-NOTES.md Updated release notes for version 1.26.0, including database migrations, new endpoints, and breaking changes.
src/django/api/constants.py Renamed ErrorMessages to APIErrorMessages, added new error message classes, and introduced NON_FIELD_ERRORS_KEY.
src/django/api/facility_actions/processing_facility_api.py Updated import for error messages from ErrorMessages to APIErrorMessages.
src/django/api/migrations/0162_update_moderationevent_table_fields.py Migration script updating ModerationEvent table schema, including primary key changes and field constraints.
src/django/api/models/moderation_event.py Modified ModerationEvent model to reflect new field constraints and primary key settings.
src/django/api/moderation_event_actions/creation/dtos/create_moderation_event_dto.py Added CreateModerationEventDTO class for encapsulating moderation event creation data.
src/django/api/moderation_event_actions/creation/event_creation_strategy.py Introduced EventCreationStrategy abstract class for event creation strategies.
src/django/api/moderation_event_actions/creation/location_contribution/location_contribution.py Added LocationContribution class for handling location data in moderation events.
src/django/api/moderation_event_actions/creation/location_contribution/processors/contribution_processor.py Created ContributionProcessor class implementing Chain of Responsibility pattern for processing location contributions.
src/django/api/moderation_event_actions/creation/location_contribution/processors/geocoding_processor.py Added GeocodingProcessor class for geocoding location data.
src/django/api/moderation_event_actions/creation/location_contribution/processors/production_location_data_processor.py Introduced ProductionLocationDataProcessor for processing moderation events related to locations.
src/django/api/moderation_event_actions/creation/location_contribution/processors/source_processor.py Created SourceProcessor for validating and processing source data.
src/django/api/moderation_event_actions/creation/moderation_event_creator.py Added ModerationEventCreator class for creating moderation events based on strategies.
src/django/api/permissions.py Updated IsRegisteredAndConfirmed class with improved error messaging and simplified permission checks.
src/django/api/serializers/v1/contribution_moderation_event_source_field_serializer.py Introduced new serializer for validating source field in moderation events.
src/django/api/serializers/v1/moderation_events_serializer.py Updated error handling to use new error message constants.
src/django/api/tests/test_data.py Added new test data for handling geocoding results.
src/django/api/tests/test_location_contribution_strategy.py Introduced tests for LocationContribution strategy.
src/django/api/views/v1/moderation_events.py Renamed patch method to partial_update and updated method signatures for consistency with RESTful conventions.
src/django/oar/urls.py Updated URL configuration to use new URLNames for route management.

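The processors listed in the table (ContributionProcessor, GeocodingProcessor, SourceProcessor, etc.) implement a Chain of Responsibility: each link handles one concern and forwards the event to the next, short-circuiting on validation errors. A simplified standalone sketch, with class names and the stubbed geocoding lookup invented for illustration rather than copied from the project code:

```python
from abc import ABC, abstractmethod

class Processor(ABC):
    """One link in the chain; forwards the event dict to the next link."""
    def __init__(self):
        self._next = None

    def set_next(self, processor: "Processor") -> "Processor":
        self._next = processor
        return processor  # returning the argument allows fluent chaining

    @abstractmethod
    def process(self, event: dict) -> dict: ...

    def _forward(self, event: dict) -> dict:
        return self._next.process(event) if self._next else event

class SourceCheck(Processor):
    def process(self, event: dict) -> dict:
        if event.get("source") not in {"API", "SLC"}:
            event.setdefault("errors", []).append(
                {"field": "source", "message": "Invalid source."})
            return event  # stop the chain on validation error
        return self._forward(event)

class Geocode(Processor):
    def process(self, event: dict) -> dict:
        # Skip geocoding when coordinates were supplied by the user.
        if "coordinates" not in event:
            event["geocode_result"] = {"status": "geocoded"}  # stubbed lookup
        return self._forward(event)

head = SourceCheck()
head.set_next(Geocode())

ok = head.process({"source": "API", "coordinates": {"lat": 1.0, "lng": 2.0}})
bad = head.process({"source": "EMAIL"})
```

Here `ok` passes through without errors (and without geocoding, since coordinates were supplied), while `bad` is stopped at the first link with a field-level error.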
Possibly related PRs

Suggested reviewers

  • mazursasha1990
  • VadimKovalenkoSNF
  • roman-stolar


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

🧹 Outside diff range and nitpick comments (52)
src/django/contricleaner/tests/test_row_required_fields_serializer.py (1)

59-65: LGTM! Good enhancement to error reporting.

The addition of the 'field' key to error dictionaries improves error traceability and aligns with REST API best practices for structured error responses. This change supports the PR's goal of standardizing error handling across the API.

Consider documenting these error response structures in your API documentation to help API consumers handle errors effectively.
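The structured error shape under discussion can be sketched as follows; the keys match those named in the review comments, while the grouping helper is a hypothetical consumer-side example:

```python
# Structured validation error: 'field' identifies the offending column,
# 'message' is human-readable, 'type' names the error class.
error = {
    "message": "name cannot consist solely of punctuation or whitespace.",
    "field": "name",
    "type": "ValidationError",
}

def by_field(errors):
    """Group error messages by field, so consumers never parse messages."""
    grouped = {}
    for e in errors:
        grouped.setdefault(e["field"], []).append(e["message"])
    return grouped
```

With the `field` key present, API consumers can attach errors to form inputs directly instead of pattern-matching on message text.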

src/django/contricleaner/tests/test_pre_header_validator.py (1)

Line range hint 11-45: Consider renaming test method for better clarity.

The current method name test_validate_header is quite generic. Consider a more descriptive name that indicates what scenarios are being tested, such as test_validate_header_required_fields_validation.

src/django/contricleaner/tests/test_row_sector_serializer.py (3)

86-86: LGTM! Consider documenting the error format.

The addition of the 'field' attribute to error messages improves error reporting clarity. This change aligns well with REST API best practices by providing more detailed error context.

Consider adding a comment block documenting the expected error message format:

# Error message format:
# {
#     'message': str,  # Human-readable error description
#     'field': str,    # Field that caused the validation error
#     'type': str      # Type of error (ValueError, ValidationError, etc.)
# }

Also applies to: 92-92, 98-98


121-121: LGTM! Consider extracting the max product types limit.

The error message format is consistent with other validation errors. However, the maximum limit of 50 product types appears to be a magic number used in multiple places.

Consider extracting this limit to a constant:

MAX_PRODUCT_TYPES = 50

# Then use it in both the test and error message:
self.assertEqual(
    result['errors'],
    [{
        'message': f'You may submit a maximum of {MAX_PRODUCT_TYPES} product types, not {product_type_count}.',
        'field': 'product_type',
        'type': 'ValidationError',
    }],
)

146-146: LGTM! Consider improving test data readability.

The error message format is consistent with other validation errors. However, the test data could be more readable.

Consider improving the test data readability:

# Extract the long sector name to a constant
LONG_SECTOR_NAME = 'Agriculture' * 10  # Creates a 100-char string

row = {
    'sector': [LONG_SECTOR_NAME]
}
src/django/contricleaner/tests/test_source_parser_csv.py (1)

162-162: LGTM! Enhanced error validation coverage.

The addition of error field validation improves the test's robustness. Consider extracting 'non_field_errors' to a constant for better maintainability.

Consider applying this change:

+ NON_FIELD_ERRORS = 'non_field_errors'
...
- expected_error_field = 'non_field_errors'
+ expected_error_field = NON_FIELD_ERRORS

Also applies to: 176-176, 180-180

src/django/contricleaner/tests/test_source_parser_xlsx.py (1)

121-128: Improved error message structure and clarity

The error message changes improve the API response by:

  • Adding field context through the 'field' key
  • Providing more specific validation feedback
  • Following a consistent error structure

Consider updating the API documentation to reflect this standardized error format.

src/django/contricleaner/lib/validators/pre_validators/pre_header_validator.py (2)

9-11: Fix inconsistent indentation in the required fields set

The continuation line is over-indented. It should align with the opening parenthesis of the previous line.

 __required_fields = {'name',
-                         'address',
-                         'country'}
+                    'address',
+                    'country'}

22-25: Consider using f-strings for better readability

While the error message formatting is correct, using an f-string would make it more readable.

-            'message': 'Required Fields are missing: {}.'
-            .format(', '.join(self.__required_fields)),
+            'message': f'Required Fields are missing: {", ".join(self.__required_fields)}.',
src/django/contricleaner/tests/test_contri_cleaner.py (1)

Line range hint 1-211: Consider test structure improvements

While the test coverage is comprehensive, consider these improvements:

  1. Move test data to fixtures for better maintainability
  2. Add edge cases for special characters in facility names and addresses
  3. Consider parameterized tests to reduce code duplication between CSV and XLSX tests
src/django/contricleaner/tests/test_composite_row_serializer.py (2)

46-46: Add test coverage for multiple facility types

While the current test case verifies basic serialization, we've lost coverage for the split pattern handling (r', |,|\|') with multiple facility types. Consider adding another test method to verify this functionality.

Example test case:

def test_get_serialized_row_multiple_facility_types(self):
    facility_source = {
        'facility_type': 'Blending|Knitting, Dyeing'
    }
    serialized_row = self.composite_row_serializer.validate(facility_source)
    self.assertEqual(
        serialized_row['fields']['facility_type']['processed_values'],
        ['Blending', 'Knitting', 'Dyeing']
    )

Also applies to: 56-57, 60-61


56-61: Consider documenting the relationship between facility_type and processing_type

The test reveals that processing_type always mirrors facility_type values. This tight coupling should be documented, and we should consider whether this duplication is intentional or if these fields serve different purposes.

Consider:

  1. Adding a comment explaining the relationship between these fields
  2. Evaluating if these fields should be merged if they always contain the same values
  3. Documenting any scenarios where these fields might diverge

Also applies to: 106-111

src/django/api/moderation_event_actions/creation/location_contribution/location_contribution.py (1)

34-48: Avoid double underscore prefix to prevent name mangling

The method __setup_location_data_processors is using double underscores, which triggers name mangling in Python. If the intent is to make it a non-public method accessible within subclasses, consider using a single underscore _setup_location_data_processors instead.

Apply this diff to adjust the method name:

-    def __setup_location_data_processors() -> ContributionProcessor:
+    def _setup_location_data_processors() -> ContributionProcessor:

And update the method call in serialize:

-        entry_location_data_processor = self.__setup_location_data_processors()
+        entry_location_data_processor = self._setup_location_data_processors()
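The mangling behavior the comment warns about can be seen in a minimal standalone example: a double-underscore method is rewritten to `_ClassName__method`, so a same-named method in a subclass does not override it.

```python
class Base:
    def __helper(self):          # mangled at compile time to _Base__helper
        return "base"

    def call(self):
        return self.__helper()   # resolves to _Base__helper, even on subclasses

class Child(Base):
    def __helper(self):          # mangled to _Child__helper; NOT an override
        return "child"

c = Child()
assert c.call() == "base"            # Base.call still finds Base's version
assert c._Child__helper() == "child" # subclass method lives under its own mangled name
```

A single leading underscore signals "internal" by convention without mangling, which is why the review suggests it when subclass access is intended.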
src/django/api/views/v1/production_locations.py (1)

Line range hint 122-128: Standardize error message capitalization

In the error response, consider capitalizing "ID" for consistency with common conventions.

Apply this diff:

-                        "detail": "The location with the given id was not found.",
+                        "detail": "The location with the given ID was not found.",
src/django/api/tests/test_location_contribution_strategy.py (1)

85-176: Separate test cases into individual methods for clarity

The test_invalid_source_value_cannot_be_accepted method contains multiple test scenarios within a single method. For better readability and isolation of test cases, consider splitting them into separate methods.

src/django/api/moderation_event_actions/creation/dtos/create_moderation_event_dto.py (2)

9-19: Add type hints and documentation for all fields

The class structure is clean, but could benefit from additional documentation and type hints:

  • Add docstring explaining the purpose of the DTO
  • Document each field's purpose and constraints
  • Consider adding validation for request_type using Enum

Example enhancement:

 @dataclass
 class CreateModerationEventDTO:
+    """Data Transfer Object for creating moderation events.
+    
+    Contains all necessary data for creating and tracking a moderation event,
+    including both required and optional fields.
+    """
     contributor_id: int
+    """ID of the contributor initiating the moderation event."""
     raw_data: Dict
+    """Raw input data before processing."""
     request_type: str  # Consider using Enum
+    """Type of request being processed."""

14-18: Consider adding validation for the Dict fields

The Dict fields (cleaned_data, geocode_result, errors) could benefit from more specific type hints to ensure data consistency.

Consider using TypedDict or adding runtime validation:

from typing import TypedDict

class GeocodeResult(TypedDict, total=False):
    lat: float
    lng: float
    formatted_address: str

class CleanedData(TypedDict, total=False):
    # Add expected fields here
    pass
src/django/api/moderation_event_actions/creation/event_creation_strategy.py (1)

7-12: Enhance documentation for the strategy interface

While the abstract class is well-structured, it would benefit from more detailed documentation:

  • Add class-level docstring explaining the strategy pattern usage
  • Document expected behavior and error handling requirements

Example enhancement:

 class EventCreationStrategy(ABC):
+    """Abstract base class defining the interface for moderation event creation strategies.
+
+    Implementations of this strategy are responsible for serializing and validating
+    moderation event data before creation. Each strategy can implement custom
+    validation and processing logic while maintaining a consistent interface.
+    """
     @abstractmethod
     def serialize(
             self,
             event_dto: CreateModerationEventDTO) -> CreateModerationEventDTO:
+        """Process and validate the event data before creation.
+
+        Args:
+            event_dto: The DTO containing the event data to be processed.
+
+        Returns:
+            The processed DTO with validated and transformed data.
+
+        Raises:
+            ValidationError: If the event data is invalid.
+        """
         pass
src/django/contricleaner/lib/helpers/split_values.py (1)

5-6: Consider performance optimization for large datasets

The current implementation performs multiple conversions (str->set->list) which could be inefficient for large datasets.

Consider maintaining the list structure throughout:

-def split_values(value: Union[str, list, set],
-                 split_pattern: str) -> List[str]:
+def split_values(
+        value: Union[str, list, set],
+        split_pattern: str,
+        *,
+        remove_duplicates: bool = True
+) -> List[str]:
     # ...
-    return list(set(re.split(split_pattern, value)))
+    result = re.split(split_pattern, value)
+    return list(set(result)) if remove_duplicates else result

Also applies to: 16-16

src/django/contricleaner/lib/serializers/row_serializers/row_clean_field_serializer.py (1)

15-22: LGTM! Consider standardizing error message format.

The error message structure and content improvements are good. The addition of the 'field' key helps with error identification.

Consider using a constant for the error message template to maintain consistency across the codebase:

-                    'message': (
-                        '{} cannot consist solely of '
-                        'punctuation or whitespace.'
-                        ).format(self.field),
+                    'message': ERROR_MESSAGES['INVALID_FIELD_CONTENT'].format(
+                        field=self.field
+                    ),
src/django/api/serializers/v1/contribution_moderation_event_source_field_serializer.py (1)

18-32: Consider using f-strings consistently and improve error message structure.

The validation logic is solid, but the error message construction could be improved.

Consider this refactor:

-            stringified_source_values = ', '.join(source_values)
-            raise ValidationError(
-                (f'The source value should be one of the following: '
-                 f'{stringified_source_values}.')
-            )
+            raise ValidationError({
+                'source': f'Invalid source. Allowed values are: {", ".join(source_values)}'
+            })

This change:

  1. Uses consistent string formatting
  2. Follows DRF's error message structure
  3. Makes the error message more concise
src/django/contricleaner/tests/test_row_clean_field_serializer.py (1)

24-28: Consider adding more test cases for comprehensive validation.

While the current tests cover the basic scenarios, consider adding tests for:

  1. Different types of whitespace characters
  2. Pure punctuation input
  3. Mixed whitespace and punctuation

Example:

def test_validate_punctuation_only(self):
    punctuation_row = {"field": "!@#$%"}
    result = self.serializer.validate(punctuation_row, self.current.copy())
    self.assertEqual(len(result["errors"]), 1)
    self.assertEqual(
        result["errors"][0]["message"],
        "field cannot consist solely of punctuation or whitespace."
    )

def test_validate_mixed_whitespace_punctuation(self):
    mixed_row = {"field": " ! @ "}
    result = self.serializer.validate(mixed_row, self.current.copy())
    self.assertEqual(len(result["errors"]), 1)
src/django/api/moderation_event_actions/creation/moderation_event_creator.py (2)

9-12: Add docstring to describe the class purpose and strategy pattern usage.

The class implements the strategy pattern for moderation event creation, which should be documented for better maintainability.

 class ModerationEventCreator:
+    """
+    Creates moderation events using a provided creation strategy.
+    
+    This class implements the strategy pattern to allow different creation strategies
+    while maintaining a consistent interface for event creation.
+    """
     def __init__(self, strategy: EventCreationStrategy) -> None:
         self.__strategy = strategy

13-31: Consider improving error handling and transaction management.

The method could benefit from:

  1. Better error handling for database operations
  2. Transaction management to ensure atomicity
  3. Method documentation
     def perform_event_creation(
             self,
             event_dto: CreateModerationEventDTO
             ) -> CreateModerationEventDTO:
+        """
+        Creates a moderation event using the configured strategy.
+        
+        Args:
+            event_dto: Data transfer object containing event creation details
+            
+        Returns:
+            The input DTO updated with the created moderation event or errors
+        """
+        from django.db import transaction
+        
         processed_event = self.__strategy.serialize(event_dto)

         if processed_event.errors:
             return event_dto

-        event_dto.moderation_event = ModerationEvent.objects.create(
-            contributor=processed_event.contributor_id,
-            request_type=processed_event.request_type,
-            raw_data=processed_event.raw_data,
-            cleaned_data=processed_event.cleaned_data,
-            geocode_result=processed_event.geocode_result,
-            source=processed_event.source
-        )
+        try:
+            with transaction.atomic():
+                event_dto.moderation_event = ModerationEvent.objects.create(
+                    contributor=processed_event.contributor_id,
+                    request_type=processed_event.request_type,
+                    raw_data=processed_event.raw_data,
+                    cleaned_data=processed_event.cleaned_data,
+                    geocode_result=processed_event.geocode_result,
+                    source=processed_event.source
+                )
+        except Exception as e:
+            event_dto.errors.append({
+                "message": f"Failed to create moderation event: {str(e)}",
+                "type": "Error"
+            })
src/django/contricleaner/tests/test_split_values_function.py (1)

10-10: Consider adding edge cases while keeping the improved assertions.

The change to assertCountEqual is appropriate for unordered collections. Consider adding tests for edge cases:

  • Empty strings/lists/sets
  • None values
  • Invalid regex patterns
  • Unicode characters

Example additional test:

def test_edge_cases(self):
    # Empty inputs
    self.assertCountEqual(split_values("", ","), [])
    self.assertCountEqual(split_values([], ","), [])
    self.assertCountEqual(split_values(set(), ","), [])
    
    # Unicode characters
    self.assertCountEqual(
        split_values("café,résumé", ","),
        ["café", "résumé"]
    )

Also applies to: 15-15, 20-20, 28-38

src/django/api/moderation_event_actions/creation/location_contribution/processors/contribution_processor.py (1)

36-41: Consider enhancing error logging with more context.

The error logging could be more helpful for debugging by including the event_dto details (excluding sensitive data).

-        log.error(
-            ('[API V1 Location Upload] Internal Moderation Event Creation '
-             'Error: The non-existing next handler for processing the '
-             'moderation event creation related to location contribution was '
-             'called in the last handler of the chain.')
-        )
+        log.error(
+            ('[API V1 Location Upload] Internal Moderation Event Creation '
+             'Error: The non-existing next handler for processing the '
+             'moderation event creation related to location contribution was '
+             'called in the last handler of the chain. '
+             'Event DTO status: %s'), event_dto.status_code)
src/django/api/moderation_event_actions/creation/location_contribution/processors/source_processor.py (2)

16-33: Consider adding debug logging for validation failures.

While the error handling is good, adding debug logging would help with troubleshooting validation issues in production.

         if not serializer.is_valid():
+            log.debug(
+                '[API V1 Location Upload] Source validation failed: %s',
+                serializer.errors
+            )
             transformed_errors = self.__transform_serializer_errors(
                 serializer.errors
             )

35-51: Add type hints for dictionary values.

The type hint for the dictionary could be more specific to better document the expected structure (this assumes `List` and `Union` are also imported from `typing`):

-    def __transform_serializer_errors(serializer_errors: Dict) -> Dict:
+    def __transform_serializer_errors(
+            serializer_errors: Dict[str, List[str]]
+    ) -> Dict[str, Union[str, List[Dict[str, str]]]]:
src/django/contricleaner/lib/serializers/row_serializers/row_required_fields_serializer.py (3)

15-15: Extract magic number 200 into a constant.

The maximum field length of 200 should be defined as a class constant for better maintainability.

+    __MAX_FIELD_LENGTH = 200
-    __valid_field_value_lengths = {'name': 200, 'address': 200}
+    __valid_field_value_lengths = {
+        'name': __MAX_FIELD_LENGTH,
+        'address': __MAX_FIELD_LENGTH
+    }

Line range hint 39-39: Fix indentation inconsistency.

The method's indentation doesn't match the class's style (8 spaces instead of 4).

-    def __validate_field_value_lengths(self,
-                                       missing_fields: Set[str],
-                                       input_row: Dict) -> List[Dict]:
+    def __validate_field_value_lengths(
+            self,
+            missing_fields: Set[str],
+            input_row: Dict) -> List[Dict]:

Line range hint 47-55: Consider using f-strings for better readability.

The string formatting could be simplified using f-strings (assuming `max_length` is looked up from `__valid_field_value_lengths` for the current field, since the original message hardcodes 200):

-                            'message': ('There is a problem with the {0}: '
-                                        'Ensure this value has at most 200 '
-                                        'characters. (it has {1})').format(
-                                            field, value_len),
+                            'message': (
+                                f'There is a problem with the {field}: '
+                                f'Ensure this value has at most {max_length} '
+                                f'characters. (it has {value_len})'
+                            ),
src/django/contricleaner/lib/serializers/row_serializers/row_facility_type_serializer.py (2)

41-45: Error message structure improvement looks good, but consider refactoring the validation logic.

While the error message formatting changes are good, the validation method could benefit from some refactoring to improve maintainability.

Consider extracting the validation logic into smaller, focused methods:

class RowFacilityTypeSerializer(RowSerializer):
+    def _validate_field(self, field: str, value: Any) -> Optional[dict]:
+        if value and not is_valid_type(value):
+            return {
+                'message': f'Expected value for {field} to be a string or a list of strings but got {value}.',
+                'field': field,
+                'type': 'ValueError',
+            }
+        return None

    def validate(self, row: dict, current: dict) -> dict:
        facility_type = row.get('facility_type')
        processing_type = row.get('processing_type')
        facility_type_processing_type = row.get('facility_type_processing_type')

        if not any((facility_type, processing_type, facility_type_processing_type)):
            return current

        facility_type_errors = []
        fields = ['facility_type', 'processing_type', 'facility_type_processing_type']
        values = [facility_type, processing_type, facility_type_processing_type]
        
        for field, value in zip(fields, values):
-            if value and not is_valid_type(value):
-                facility_type_errors.append({
-                    'message': 'Expected value for {} to be a string or a list of strings but got {}.'.format(field, value),
-                    'field': field,
-                    'type': 'ValueError',
-                })
+            error = self._validate_field(field, value)
+            if error:
+                facility_type_errors.append(error)

50-50: Consider using list comprehension for error collection.

The error collection could be more Pythonic.

-        facility_type_errors = []
-        for field, value in zip(fields, values):
-            if value and not is_valid_type(value):
-                facility_type_errors.append(...)
-        if facility_type_errors:
-            current['errors'].extend(facility_type_errors)
+        errors = [self._validate_field(f, v) for f, v in zip(fields, values)]
+        errors = [e for e in errors if e is not None]
+        if errors:
+            current['errors'].extend(errors)
src/django/api/models/moderation_event.py (1)

Line range hint 103-108: Consider adding validation for geocode_result

While making geocode_result optional is valid, consider adding validation to ensure that when it's blank, latitude and longitude are indeed provided in the cleaned_data.

 geocode_result = models.JSONField(
     blank=True,
     default=dict,
     help_text=(
         'Result of the geocode operation.'
     )
 )
+
+    def clean(self):
+        super().clean()
+        if not self.geocode_result:
+            if not (self.cleaned_data.get('lat')
+                    and self.cleaned_data.get('lng')):
+                raise ValidationError(
+                    'Either geocode_result or lat/lng must be provided'
+                )
src/django/api/serializers/v1/moderation_events_serializer.py (1)

88-92: Consider adding error logging

The comment [OSDEV-1441] suggests adding error logging to Rollbar. Consider implementing this now to avoid technical debt.

 if errors:
-    # [OSDEV-1441] Pass error msg to the Rollbar here
+    logger.error(
+        "Validation errors in ModerationEventsSerializer",
+        extra={"errors": errors}
+    )
     raise ValidationError({
         "detail": APIV1CommonErrorMessages.COMMON_REQ_QUERY_ERROR,
         "errors": errors
     })
src/django/contricleaner/lib/contri_cleaner.py (1)

58-60: Consider adding type hints to error dictionary

While the error handling enhancement is good, consider adding type hints to improve code maintainability.

- return ListDTO(errors=[{
-     'message': str(err),
-     'field': NON_FIELD_ERRORS_KEY,
-     'type': 'ParsingError',
- }])
+ from typing import TypedDict
+
+ class ParsingErrorDict(TypedDict):
+     message: str
+     field: str
+     type: str
+
+ return ListDTO(errors=[ParsingErrorDict(
+     message=str(err),
+     field=NON_FIELD_ERRORS_KEY,
+     type='ParsingError',
+ )])
src/django/contricleaner/lib/serializers/row_serializers/row_sector_serializer.py (3)

37-42: Extract error message format to a constant

Consider extracting the error message format string to a class constant for better maintainability.

+ __INVALID_TYPE_ERROR_FORMAT = 'Expected value for {} to be a string or a list of strings but got {}.'

  def validate(self, row: dict, current: dict) -> dict:
      # ...
      sector_errors.append({
-         'message': 'Expected value for {} to be a string or a list of strings but got {}.'.format(
+         'message': self.__INVALID_TYPE_ERROR_FORMAT.format(
              field, value
          ),
          'field': field,
          'type': 'ValueError',
      })

57-64: Consider creating an ErrorBuilder utility class

The error dictionary structure is repeated across methods. Consider creating a utility class to standardize error creation.

+ class ValidationErrorBuilder:
+     @staticmethod
+     def build(message: str, field: str, error_type: str) -> dict:
+         return {
+             'message': message,
+             'field': field,
+             'type': error_type
+         }

  # Usage:
  current['errors'].append(
-     {
-         'message': 'You may submit a maximum of {} product types, not {}.'.format(
-             MAX_PRODUCT_TYPE_COUNT, len(product_types)
-         ),
-         'field': 'product_type',
-         'type': 'ValidationError',
-     }
+     ValidationErrorBuilder.build(
+         message=f'You may submit a maximum of {MAX_PRODUCT_TYPE_COUNT} product types, not {len(product_types)}.',
+         field='product_type',
+         error_type='ValidationError'
+     )
  )

109-111: Add docstring to private validation method

The __validate_sector_value_lengths method would benefit from a docstring explaining its purpose and return value.

  def __validate_sector_value_lengths(self, sectors: List) -> List:
+     """
+     Validates that all sector values are within the maximum length limit.
+     
+     Args:
+         sectors: List of sector values to validate
+         
+     Returns:
+         List of validation errors if any sectors exceed the length limit,
+         empty list otherwise
+     """
      if not self.__are_sector_values_valid_length(sectors):
          return [{
src/django/contricleaner/tests/test_row_facility_type_serializer.py (2)

24-46: Consider adding test cases for edge cases

While the test improvements are good, consider adding test cases for:

  • Empty strings
  • Whitespace-only values
  • Special characters
  • Maximum length values

Example test case:

def test_validate_with_whitespace_facility_type(self):
    row = {'facility_type': '   '}
    current = {}
    validated = self.serializer.validate(row, current)
    self.assertEqual(
        validated['facility_type']['processed_values'],
        []
    )

114-116: Extract test data to class constants

Consider moving test data to class constants for better maintainability and reusability.

class RowFacilityTypeSerializerTest(TestCase):
+     INVALID_FACILITY_TYPE = 123
+     INVALID_TYPE_ERROR_FORMAT = (
+         'Expected value for {} to be a string or a list of strings but got {}.'
+     )

      def test_validate_with_invalid_facility_type(self):
-         row = {'facility_type': 123}
+         row = {'facility_type': self.INVALID_FACILITY_TYPE}
          current = {'errors': []}
          validated = self.serializer.validate(row, current)
          self.assertEqual(
              validated,
              {
                  'errors': [{
                      'message': self.INVALID_TYPE_ERROR_FORMAT.format(
-                         'facility_type', 123
+                         'facility_type', self.INVALID_FACILITY_TYPE
                      ),
                      'field': 'facility_type',
                      'type': 'ValueError',
                  }],
              }
          )
src/django/api/views/v1/moderation_events.py (1)

114-117: Consider using more specific exception handling

While the error handling is functional, catching the broad `Exception` class might swallow unexpected errors (strictly speaking this is not a bare `except`, but it is nearly as permissive). Consider catching only the specific exceptions you expect to handle.

-        try:
+        try:
             item = add_production_location.run_processing()
-        except Exception as error:
+        except (ValueError, ValidationError) as error:  # Add other specific exceptions as needed
             return self.moderation_events_service.handle_processing_error(
                 error
             )

Also applies to: 126-131
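A minimal standalone sketch (hypothetical names, plain Python rather than the view's actual DRF code) of why narrowing the exception type matters — a broad `except Exception` would also swallow programming errors such as `TypeError` that should surface loudly:

```python
class ValidationError(Exception):
    """Stand-in for the framework's validation error."""


def run_processing(payload):
    if not isinstance(payload, dict):
        raise TypeError('payload must be a dict')   # programming error
    if 'name' not in payload:
        raise ValidationError('name is required')   # expected input error
    return {'status': 'PENDING'}


def handle(payload):
    try:
        return run_processing(payload)
    except ValidationError as error:
        # Only expected input errors become an error response; a
        # TypeError from a bad call site still propagates to the caller.
        return {'detail': str(error)}


ok = handle({'name': 'Factory A'})
err = handle({})
```

With `except Exception`, the `handle(None)` bug below would silently become a client-facing error response instead of a traceback pointing at the broken call site.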

src/django/api/tests/test_v1_utils.py (1)

145-146: LGTM! Tests updated to match new error field name

The test correctly validates the new error field naming using non_field_errors.

Fix indentation inconsistency

The indentation on line 146 uses spaces inconsistently.

-        self.assertEqual(response.data['errors'][0]['field'],
-                         "non_field_errors")
+        self.assertEqual(
+            response.data['errors'][0]['field'],
+            "non_field_errors"
+        )
src/django/api/constants.py (2)

195-198: Consider adding a deprecation warning for APIErrorMessages.

Since new v1-specific error message classes have been introduced, consider marking this class as deprecated to encourage usage of the new classes.

 class APIErrorMessages:
+    """
+    @deprecated: Use APIV1CommonErrorMessages or APIV1LocationContributionErrorMessages instead.
+    """
     GEOCODED_NO_RESULTS = 'The address you submitted can not be geocoded.'

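Beyond a docstring note, the deprecation could also be made visible at runtime. A minimal sketch of one possible pattern (all names hypothetical; this is not the repo's actual constants module) — a small proxy that emits a `DeprecationWarning` whenever an attribute of the legacy class is accessed:

```python
import warnings


class APIV1CommonErrorMessages:
    COMMON_REQ_QUERY_ERROR = 'The request query is invalid.'


class _DeprecatedClassProxy:
    """Wraps a constants class so attribute access warns once per use."""

    def __init__(self, target, replacement_name):
        self._target = target
        self._replacement_name = replacement_name

    def __getattr__(self, name):
        warnings.warn(
            f'{self._target.__name__} is deprecated; '
            f'use {self._replacement_name} instead.',
            DeprecationWarning,
            stacklevel=2,
        )
        return getattr(self._target, name)


class _APIErrorMessages:
    GEOCODED_NO_RESULTS = 'The address you submitted can not be geocoded.'


# Existing call sites keep working but now trigger a DeprecationWarning.
APIErrorMessages = _DeprecatedClassProxy(
    _APIErrorMessages, 'APIV1CommonErrorMessages'
)
```

This keeps existing imports working while nudging callers toward the v1-specific classes; whether the runtime warning is worth the indirection is a team call.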
232-235: Consider moving NON_FIELD_ERRORS_KEY to a more general constants class.

This constant seems more general-purpose and might be useful outside the context of API v1. Consider moving it to a more general constants class.

src/django/api/tests/test_facility_submit.py (1)

Line range hint 76-77: Address TODO comments regarding Dedupe Hub integration.

There are several TODO comments mentioning issues between the test database and Dedupe Hub live database. These should be addressed to ensure proper test coverage.

Would you like me to help create a solution for mocking the Dedupe Hub integration in these tests?

Also applies to: 142-143, 166-167

src/django/api/tests/test_production_locations_create.py (4)

47-68: Consider using a constant for the error message.

The error message string could be moved to a constant to maintain consistency and make updates easier.

+ ERROR_MESSAGE_UNCONFIRMED = ('User must be registered and have confirmed their email to access.')

  def test_only_registered_and_confirmed_has_access(self):
      expected_response_body = {
-         'detail': ('User must be registered and have confirmed their email to access.')
+         'detail': self.ERROR_MESSAGE_UNCONFIRMED
      }

70-102: Extract magic numbers into constants.

The throttling limits (30 requests) should be defined as class constants for better maintainability.

+ THROTTLE_LIMIT = 30
+ EXPECTED_SUCCESS_STATUS = 202

  @patch('api.geocoding.requests.get')
  def test_default_throttling_is_applied(self, mock_get):
      # ...
-     for _ in range(30):
+     for _ in range(self.THROTTLE_LIMIT):
          response = self.client.post(
              self.url,
              self.common_valid_req_body,
              content_type='application/json'
          )
-         self.assertEqual(response.status_code, 202)
+         self.assertEqual(response.status_code, self.EXPECTED_SUCCESS_STATUS)

113-131: Add docstring to explain the waffle switch usage.

The test uses a waffle switch for feature flagging, but its purpose isn't immediately clear.

  @override_switch('disable_list_uploading', active=True)
  def test_client_cannot_post_when_upload_is_blocked(self):
+     """
+     Test that the endpoint returns 503 when the 'disable_list_uploading' feature flag is active.
+     This simulates maintenance mode where uploads are temporarily disabled.
+     """
      expected_error = (
          'Open Supply Hub is undergoing maintenance and not accepting new '
          'data at the moment. Please try again in a few minutes.'
      )

132-159: Extract error messages into constants.

Error messages are repeated and should be moved to class constants.

+ ERROR_GENERAL = 'The request body is invalid.'
+ ERROR_INVALID_STRUCTURE = 'Invalid data. Expected a dictionary (object), but got list.'
+ ERROR_FIELD = 'non_field_errors'

  def test_endpoint_supports_only_dictionary_structure(self):
      expected_general_error = (
-         'The request body is invalid.'
+         self.ERROR_GENERAL
      )
doc/release/RELEASE-NOTES.md (2)

15-29: Fix unordered list formatting

The unordered list uses inconsistent markers. Replace dashes with asterisks to maintain consistent formatting throughout the document.

-- 0162_update_moderationevent_table_fields.py - This migration updates the ModerationEvent table and its constraints.
+* 0162_update_moderationevent_table_fields.py - This migration updates the ModerationEvent table and its constraints.

31-46: Fix list indentation

The list items under Code/API changes have inconsistent indentation. Adjust to maintain 2-space indentation for all items.

* [OSDEV-1453](https://opensupplyhub.atlassian.net/browse/OSDEV-1453) - The `detail` keyword instead of `message` has been applied in error response objects for V1 endpoints.
-    1. Introduced a new POST `/api/v1/production-locations/` endpoint based on the API v1 specification. This endpoint allows the creation of a new moderation event for the production location creation with the given details.
-    2. Removed redundant redefinition of paths via the `as_view` method for all the v1 API endpoints since they are already defined via `DefaultRouter`.
+  1. Introduced a new POST `/api/v1/production-locations/` endpoint based on the API v1 specification. This endpoint allows the creation of a new moderation event for the production location creation with the given details.
+  2. Removed redundant redefinition of paths via the `as_view` method for all the v1 API endpoints since they are already defined via `DefaultRouter`.
🧰 Tools
🪛 Markdownlint (0.35.0)

36-36: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


37-37: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


40-40: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


41-41: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between d64659c and f26e9a1.

📒 Files selected for processing (52)
  • doc/release/RELEASE-NOTES.md (2 hunks)
  • src/django/api/constants.py (1 hunks)
  • src/django/api/facility_actions/processing_facility_api.py (2 hunks)
  • src/django/api/migrations/0162_update_moderationevent_table_fields.py (1 hunks)
  • src/django/api/models/moderation_event.py (3 hunks)
  • src/django/api/moderation_event_actions/creation/dtos/create_moderation_event_dto.py (1 hunks)
  • src/django/api/moderation_event_actions/creation/event_creation_strategy.py (1 hunks)
  • src/django/api/moderation_event_actions/creation/location_contribution/location_contribution.py (1 hunks)
  • src/django/api/moderation_event_actions/creation/location_contribution/processors/contribution_processor.py (1 hunks)
  • src/django/api/moderation_event_actions/creation/location_contribution/processors/geocoding_processor.py (1 hunks)
  • src/django/api/moderation_event_actions/creation/location_contribution/processors/production_location_data_processor.py (1 hunks)
  • src/django/api/moderation_event_actions/creation/location_contribution/processors/source_processor.py (1 hunks)
  • src/django/api/moderation_event_actions/creation/moderation_event_creator.py (1 hunks)
  • src/django/api/permissions.py (1 hunks)
  • src/django/api/serializers/v1/contribution_moderation_event_source_field_serializer.py (1 hunks)
  • src/django/api/serializers/v1/moderation_events_serializer.py (2 hunks)
  • src/django/api/serializers/v1/production_locations_serializer.py (2 hunks)
  • src/django/api/tests/test_data.py (1 hunks)
  • src/django/api/tests/test_facility_submit.py (2 hunks)
  • src/django/api/tests/test_location_contribution_strategy.py (1 hunks)
  • src/django/api/tests/test_production_locations_create.py (1 hunks)
  • src/django/api/tests/test_v1_utils.py (1 hunks)
  • src/django/api/views/facility/facilities_view_set.py (2 hunks)
  • src/django/api/views/facility/facility_list_view_set.py (2 hunks)
  • src/django/api/views/v1/moderation_events.py (3 hunks)
  • src/django/api/views/v1/production_locations.py (5 hunks)
  • src/django/api/views/v1/url_names.py (1 hunks)
  • src/django/api/views/v1/utils.py (3 hunks)
  • src/django/contricleaner/constants.py (1 hunks)
  • src/django/contricleaner/lib/contri_cleaner.py (2 hunks)
  • src/django/contricleaner/lib/helpers/split_values.py (1 hunks)
  • src/django/contricleaner/lib/serializers/row_serializers/row_clean_field_serializer.py (1 hunks)
  • src/django/contricleaner/lib/serializers/row_serializers/row_country_serializer.py (1 hunks)
  • src/django/contricleaner/lib/serializers/row_serializers/row_facility_type_serializer.py (1 hunks)
  • src/django/contricleaner/lib/serializers/row_serializers/row_required_fields_serializer.py (2 hunks)
  • src/django/contricleaner/lib/serializers/row_serializers/row_sector_serializer.py (2 hunks)
  • src/django/contricleaner/lib/validators/pre_validators/pre_header_validator.py (2 hunks)
  • src/django/contricleaner/tests/test_composite_row_serializer.py (4 hunks)
  • src/django/contricleaner/tests/test_contri_cleaner.py (4 hunks)
  • src/django/contricleaner/tests/test_pre_header_validator.py (1 hunks)
  • src/django/contricleaner/tests/test_row_clean_field_serializer.py (1 hunks)
  • src/django/contricleaner/tests/test_row_country_serializer.py (1 hunks)
  • src/django/contricleaner/tests/test_row_facility_type_serializer.py (5 hunks)
  • src/django/contricleaner/tests/test_row_required_fields_serializer.py (1 hunks)
  • src/django/contricleaner/tests/test_row_sector_serializer.py (3 hunks)
  • src/django/contricleaner/tests/test_source_parser_csv.py (4 hunks)
  • src/django/contricleaner/tests/test_source_parser_xlsx.py (4 hunks)
  • src/django/contricleaner/tests/test_split_values_function.py (1 hunks)
  • src/django/countries/lib/get_country_code.py (1 hunks)
  • src/django/countries/tests/test_country_code_parser.py (1 hunks)
  • src/django/oar/settings.py (2 hunks)
  • src/django/oar/urls.py (3 hunks)
✅ Files skipped from review due to trivial changes (5)
  • src/django/countries/tests/test_country_code_parser.py
  • src/django/countries/lib/get_country_code.py
  • src/django/contricleaner/constants.py
  • src/django/api/views/v1/url_names.py
  • src/django/contricleaner/lib/serializers/row_serializers/row_country_serializer.py
🧰 Additional context used
📓 Learnings (1)
src/django/api/serializers/v1/moderation_events_serializer.py (1)
Learnt from: VadimKovalenkoSNF
PR: opensupplyhub/open-supply-hub#391
File: src/django/api/views/v1/moderation_events.py:41-43
Timestamp: 2024-11-12T11:42:07.768Z
Learning: In the `ModerationEvents` API endpoint, parameter validation is handled in `src/django/api/serializers/v1/moderation_events_serializer.py`.
🪛 Markdownlint (0.35.0)
doc/release/RELEASE-NOTES.md

36-36: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


37-37: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


40-40: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


41-41: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


58-58: Expected: 2; Actual: 4
Unordered list indentation

(MD007, ul-indent)

🔇 Additional comments (35)
src/django/contricleaner/tests/test_source_parser_csv.py (1)

53-57: LGTM! Consistent update of data structures.

The conversion from sets to lists in processed_values maintains data integrity while potentially providing better deterministic behavior for testing.

Let's verify this change is consistent across the codebase:

Also applies to: 112-116

✅ Verification successful

Consistent usage of list literals for processed_values confirmed

The verification shows that all occurrences of processed_values across the test files consistently use list literals [...] instead of set literals {...}. No instances of set literals were found, confirming the standardization is complete.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any remaining set literals in processed_values
# to ensure consistent data structure usage

# Search for set literals in processed_values
rg "processed_values':\s*\{" --type py

# Search for list literals in processed_values to confirm the pattern
rg "processed_values':\s*\[" --type py

Length of output: 1995

src/django/contricleaner/tests/test_source_parser_xlsx.py (4)

107-107: Consistent with previous changes

The conversion from sets to lists continues the standardization pattern.

Also applies to: 111-111


179-179: Consistent with previous changes

The conversion from sets to lists maintains consistency throughout the test cases.

Also applies to: 183-183


Line range hint 1-248: Well-structured test coverage for API changes

The test file maintains comprehensive coverage while adapting to the API standardization changes. The test cases effectively validate:

  • Consistent data structures (lists instead of sets)
  • Standardized error formats
  • Various input scenarios and edge cases

72-72: Verify consistency of data structure changes across the codebase

The change from sets to lists for processed_values appears to be for better JSON serialization and deterministic ordering. This aligns with API standardization efforts.

Let's verify this change is consistent across the codebase:

Also applies to: 76-76

✅ Verification successful

This will help us see the actual usage patterns and verify if the conversion from sets to lists is consistent across all test files.


Data structure standardization is consistent across the codebase

The verification shows that all processed_values are consistently using list literals [] across all test files and no set literals {} remain. This change is uniformly applied in:

  • test_source_parser_xlsx.py
  • test_row_facility_type_serializer.py
  • test_composite_row_serializer.py
  • test_source_parser_csv.py
  • test_contri_cleaner.py
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for any remaining set literals in processed_values assignments
rg -l "processed_values['\"]?\s*:\s*\{" --type py

# Search for the new list format to confirm standardization
rg -l "processed_values['\"]?\s*:\s*\[" --type py

Length of output: 408


Script:

#!/bin/bash
# Let's examine the actual content of these files to see the data structure usage
for file in $(rg -l "processed_values" --type py); do
    echo "=== $file ==="
    rg "processed_values.*[:=].*[{\[]" -A 2 "$file"
done

Length of output: 5093

src/django/contricleaner/lib/validators/pre_validators/pre_header_validator.py (1)

5-5: LGTM! Good standardization of error handling

The addition of NON_FIELD_ERRORS_KEY import aligns well with the PR's objective of standardizing error responses across the API.

src/django/contricleaner/tests/test_contri_cleaner.py (3)

114-114: LGTM! Consistent implementation across file types

The changes maintain parity between CSV and XLSX processing, ensuring consistent behavior regardless of the input format.

Also applies to: 118-118


163-163: LGTM! Enhanced error handling with field validation

The addition of error field validation improves the error response structure and aligns with Django REST framework conventions. This change supports the PR's goal of standardizing error responses across the API.

Also applies to: 174-174, 178-178


53-53: Verify consistency with ContriCleaner implementation

The change from set to list for processed_values suggests a modification in how facility types are handled. Please ensure this aligns with any changes in the ContriCleaner implementation.

Also applies to: 57-57

✅ Verification successful

Based on the implementation review, I can now provide a final response:

Change from set to list is consistent with implementation

The change from set to list in the test's processed_values is correct and matches the implementation. The split_values() function in RowFacilityTypeSerializer processes the raw values and returns them as processed values, which are then used in the facility type data structure. The test is properly reflecting the expected data format.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for related changes in ContriCleaner implementation
ast-grep --pattern 'processed_values = $_' 

Length of output: 65663


Script:

#!/bin/bash
# Search for ContriCleaner implementation and related files
rg -l "ContriCleaner" --type py

# Then search for processed_values usage in those files
rg "processed_values" --type py

Length of output: 4331


Script:

#!/bin/bash
# Check the implementation of ContriCleaner and related serializers
cat src/django/contricleaner/lib/contri_cleaner.py
cat src/django/contricleaner/lib/serializers/row_serializers/row_facility_type_serializer.py

Length of output: 6192

src/django/api/moderation_event_actions/creation/location_contribution/location_contribution.py (1)

23-32: Implementation of serialize method looks good

The serialize method correctly processes the event DTO using the location data processors.

src/django/api/views/v1/production_locations.py (1)

135-179: create method implementation is correct

The create method properly handles service availability checks, input data validation, and moderation event creation using the provided strategy. Error responses are appropriately constructed based on validation outcomes.

src/django/api/tests/test_location_contribution_strategy.py (1)

411-501: Verify the mapping of 'sector' and 'product_type' fields in cleaned data

In test_moderation_event_is_created_with_coordinates_properly, the input data's 'sector' field is ['Apparel', 'Equipment'], but the cleaned data shows 'sector' as ['Unspecified'] and includes the original 'sector' values in 'product_type'. Please verify if this is the intended behavior.

src/django/contricleaner/tests/test_row_country_serializer.py (1)

30-34: LGTM! Error validation improvements.

The changes enhance error validation by:

  1. Using consistent error message format
  2. Adding field validation in error response
src/django/api/moderation_event_actions/creation/location_contribution/processors/contribution_processor.py (1)

15-22: LGTM! Well-structured implementation of Chain of Responsibility pattern.

The class is well-documented and follows good design principles with clear separation of concerns.
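For reference, the pattern the processor implements can be reduced to a few lines. A minimal Chain of Responsibility sketch (handler and field names here are illustrative, not the repo's actual processor classes):

```python
from abc import ABC, abstractmethod
from typing import Optional


class ContributionProcessor(ABC):
    """Each handler does its step, then delegates to the next in the chain."""

    def __init__(self) -> None:
        self._next: Optional['ContributionProcessor'] = None

    def set_next(self, nxt: 'ContributionProcessor') -> 'ContributionProcessor':
        self._next = nxt
        return nxt  # returning nxt lets callers chain set_next calls

    def process(self, event: dict) -> dict:
        event = self.handle(event)
        if self._next is not None:
            return self._next.process(event)
        return event

    @abstractmethod
    def handle(self, event: dict) -> dict: ...


class SourceProcessor(ContributionProcessor):
    def handle(self, event: dict) -> dict:
        event['source'] = 'API'
        return event


class GeocodingProcessor(ContributionProcessor):
    def handle(self, event: dict) -> dict:
        # Only geocode when coordinates were not supplied by the caller.
        event.setdefault('geocode_result', {'lat': 0.0, 'lng': 0.0})
        return event


first = SourceProcessor()
first.set_next(GeocodingProcessor())
result = first.process({'name': 'Factory A'})
```

The concrete chain in this PR is longer (source, data cleaning, geocoding, event creation), but each link follows this same handle-then-delegate shape, which is what makes the "non-existing next handler" log message above meaningful.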

src/django/api/serializers/v1/production_locations_serializer.py (1)

64-64: LGTM! Error message standardization looks good.

The change to use APIV1CommonErrorMessages.COMMON_REQ_QUERY_ERROR aligns with the project's effort to standardize error messages across the API. The error response structure with both detail and errors fields follows REST API best practices.

src/django/api/permissions.py (1)

67-68: LGTM! More descriptive error message.

The updated error message is more user-friendly and clearly explains the requirements for access.

src/django/api/migrations/0162_update_moderationevent_table_fields.py (2)

10-23: Well-documented migration with clear objectives!

The migration's docstring provides excellent documentation of all changes being made to the ModerationEvent table. This level of detail helps future developers understand the purpose and impact of each modification.


29-53: Verify the migration's impact on existing data

The migration operations look correct, but since we're making the contributor field non-nullable and changing the primary key, we should verify:

  1. All existing records have a contributor value
  2. All UUIDs are unique
✅ Verification successful

Based on the evidence gathered from the codebase:

  1. We can see the current model definition in src/django/api/models/moderation_event.py shows the contributor field is already part of the model.
  2. The migration 0162_update_moderationevent_table_fields.py is making changes to existing fields, including making the contributor field non-nullable with PROTECT on delete.
  3. The model has been using UUIDs, and the migration is just formalizing the UUID field as the primary key.

The migration appears safe to proceed

The changes in the migration are structural improvements to an existing table:

  • Making UUID the explicit primary key (it was already being used as a unique identifier)
  • Strengthening the contributor relationship with PROTECT on delete (which is safer than the current setup)
  • The contributor field was already present, and making it non-nullable is safe as the model's usage in tests and views shows it's always set
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify data integrity for migration
# Check for any null contributors in existing data
echo "Checking for null contributors..."
python3 -c "
import os
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'oar.settings')
import django
django.setup()
from django.db.models import Count
from api.models import ModerationEvent
null_contributors = ModerationEvent.objects.filter(contributor__isnull=True).count()
print(f'Records with null contributors: {null_contributors}')

# Check for duplicate UUIDs
duplicate_uuids = ModerationEvent.objects.values('uuid').annotate(count=Count('uuid')).filter(count__gt=1).count()
print(f'Records with duplicate UUIDs: {duplicate_uuids}')
"

Length of output: 632


Script:

#!/bin/bash
# Let's examine the model definition and any existing migrations to understand the current state
# First, let's find the model definition
rg -A 10 "class ModerationEvent" 

# Then, let's look for any previous migrations that might have set up the contributor field
fd ".*migrations.*py" | xargs rg "contributor.*ModerationEvent"

Length of output: 11940


Script:

#!/bin/bash
# Let's examine the model definition and previous migrations more specifically
# First, get the current model definition
rg -A 20 "^class ModerationEvent\(models\.Model\):" src/django/api/models/moderation_event.py

# Then look for previous migrations that set up the contributor field
fd "migrations" --type f | sort -n | xargs rg "contributor.*ModerationEvent|CreateModel.*ModerationEvent"

Length of output: 787


Script:

#!/bin/bash
# Let's look at the migration history for ModerationEvent table
fd "migrations" --type f | sort -n | xargs rg "class Migration.*ModerationEvent|^    dependencies = "

# And specifically look at the initial migration that created this table
fd "migrations" --type f | sort -n | xargs rg -A 30 "CreateModel.*name='moderationevent'"

Length of output: 14636

src/django/api/models/moderation_event.py (1)

Line range hint 27-34: UUID field configuration looks good

The UUID field is properly configured as the primary key with appropriate constraints and helpful documentation.
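As a quick sanity sketch of why client-side `uuid4` values are safe to promote to a primary key (the loop size here is arbitrary):

```python
# uuid4 draws from a 122-bit random space, so client-side generation
# is collision-free for any realistic table size.
import uuid

ids = {uuid.uuid4() for _ in range(10_000)}
print(len(ids))  # 10000 distinct values, i.e. no collisions
```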

src/django/api/serializers/v1/moderation_events_serializer.py (1)

89-89: Good standardization of error messages

Using APIV1CommonErrorMessages.COMMON_REQ_QUERY_ERROR aligns with the project's goal of standardizing error responses across v1 endpoints.

src/django/contricleaner/lib/contri_cleaner.py (1)

26-26: LGTM: Import addition aligns with error handling standardization

The addition of NON_FIELD_ERRORS_KEY import supports the standardized error handling approach.

src/django/api/views/v1/utils.py (2)

50-51: LGTM! Good use of constants for error messages

The change to use APIV1CommonErrorMessages.COMMON_REQ_QUERY_ERROR improves consistency in error messaging across the API.


78-81: LGTM! Improved error field naming

Good use of NON_FIELD_ERRORS_KEY instead of hardcoded "general". This aligns better with Django's error handling patterns and improves consistency.
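A minimal sketch of the pattern, assuming the constant carries DRF's default value `non_field_errors` (the helper name is hypothetical; the project's actual constant is defined in settings and imported, not redefined inline):

```python
# Hypothetical helper illustrating the standardized error key.
NON_FIELD_ERRORS_KEY = "non_field_errors"

def build_general_error(detail: str) -> dict:
    # Errors not tied to a specific field go under the shared key
    # instead of an ad-hoc string like "general".
    return {"field": NON_FIELD_ERRORS_KEY, "detail": detail}

print(build_general_error("The request query is invalid."))
```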

src/django/api/views/v1/moderation_events.py (2)

81-81: LGTM! Better alignment with DRF conventions

Renaming patch to partial_update better aligns with Django REST Framework's ViewSet conventions.


110-111: LGTM! Well-structured endpoint definition

Good use of the @action decorator to expose the production-locations endpoint. The parameter rename from moderation_id to pk maintains consistency with ViewSet conventions.

src/django/oar/urls.py (2)

33-33: LGTM! Router registration changes look good.

The changes to use URLNames constants for router registration improve code maintainability and consistency.

Also applies to: 57-59, 62-64


98-98: Verify the impact of removing specific path definitions.

The simplified api_v1 list now only includes router URLs. While this is cleaner, we should verify that all necessary endpoints are still accessible through the router.

✅ Verification successful

Let me gather more information about the router configuration to ensure all endpoints are properly registered.


Let me check one more thing to verify the complete router configuration and view imports.


The simplified api_v1 list is correctly configured and safe

The verification shows that all necessary endpoints are properly registered through the router:

  • ProductionLocations view is registered at api/v1/production-locations
  • ModerationEvents view is registered at api/v1/moderation-events

Both views are properly imported and registered with the v1_router, and the router's URLs are correctly included in the api_v1 list. The simplified path definition maintains all required functionality.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any direct path references to v1 API endpoints that might be affected

# Search for direct path references
rg -l "api/v1/production-locations|api/v1/moderation-events" --type py

# Search for potential view imports that might need router registration
ast-grep --pattern 'class $_(ProductionLocations|ModerationEvents)View'

Length of output: 406


Script:

#!/bin/bash
# Check router configuration and view registrations
rg -A 5 "v1_router = .*Router\(" --type py

# Check for any URL patterns in the tests that might indicate missing endpoints
rg "reverse\(['\"].*v1.*['\"]" --type py

# Look for view registrations in the router
rg "v1_router\.register" --type py

Length of output: 513


Script:

#!/bin/bash
# Get the full context of urls.py to see all router registrations
cat src/django/oar/urls.py

# Check for the view class definitions
ast-grep --pattern 'class ProductionLocations'
ast-grep --pattern 'class ModerationEvents'

Length of output: 6823

src/django/api/constants.py (1)

207-216: LGTM! Well-structured error message organization.

The new error message classes provide clear separation of concerns and more descriptive error messages. The static method for generating data type errors is a nice touch for consistency.

Also applies to: 219-229
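A rough sketch of the class split described above; the class names match the review, but the message texts and the static-method signature are assumptions, not the project's actual code:

```python
class APIErrorMessages:
    # Generic messages for the main (non-v1) API endpoints.
    MAINTENANCE_MODE = "Maintenance mode is active. Please try again later."


class APIV1CommonErrorMessages:
    # Messages specific to the v1 API endpoints.
    COMMON_REQ_QUERY_ERROR = "The request query is invalid."

    @staticmethod
    def common_data_type_error(field_name: str, expected_type: str) -> str:
        # Produces a uniformly formatted data-type error for any field.
        return f"Field '{field_name}' must be of type {expected_type}."


print(APIV1CommonErrorMessages.common_data_type_error("coordinates", "object"))
```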

src/django/api/tests/test_facility_submit.py (1)

4-4: LGTM! Error message constant update is correct.

The change to use APIErrorMessages is consistent with the updates in constants.py.

Also applies to: 232-232

src/django/api/tests/test_production_locations_create.py (1)

1-46: LGTM! Well-structured test setup.

The test class initialization and setup is well-organized with:

  • Proper imports
  • Clear test data initialization
  • Good separation of concerns in setUp method
  • Helper login method
src/django/api/facility_actions/processing_facility_api.py (1)

6-6: LGTM! Consistent error message usage.

The update to use APIErrorMessages is correctly implemented and maintains consistency with the new error handling approach.

Also applies to: 249-249

src/django/api/tests/test_data.py (1)

395-398: LGTM! Well-structured test data.

The new geocoding_no_results test data correctly represents the Google Geocoding API's response format for zero results.

src/django/api/views/facility/facility_list_view_set.py (1)

96-98: LGTM! Error message handling standardization.

The change from ErrorMessages to APIErrorMessages aligns with the codebase-wide standardization of error handling.

Let's verify the consistent usage of APIErrorMessages across the codebase:

✅ Verification successful

Inconsistent error message handling across API endpoints

The verification reveals mixed usage of error message constants:

  • APIErrorMessages is used in facility-related endpoints
  • APIV1CommonErrorMessages is used in v1 API endpoints and processors

This suggests the standardization is incomplete, with different parts of the codebase using different error message constants.

Let's verify if these are different constants or aliases:


Let's check the actual constants definition to understand if this is an intentional design or needs standardization:


The error message usage is correctly standardized

The code inspection reveals this is an intentional design with two distinct error message classes in api/constants.py:

  • APIErrorMessages: Used for the main API endpoints, containing generic messages
  • APIV1CommonErrorMessages: Specifically designed for V1 API endpoints with more detailed error handling

The change from ErrorMessages to APIErrorMessages in facility_list_view_set.py is correct as it aligns with the non-v1 API endpoints. Both classes have the same MAINTENANCE_MODE message with slightly different formatting, maintaining consistency within their respective API versions.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any remaining usage of old ErrorMessages constant
# and confirm consistent usage of APIErrorMessages

# Check for any remaining usage of ErrorMessages
rg "ErrorMessages\." --type py

# Verify the usage of APIErrorMessages
rg "APIErrorMessages\." --type py

Length of output: 2796


Script:

#!/bin/bash
# Check the definition and relationship between these error message classes
ast-grep --pattern 'class $_(ErrorMessages|CommonErrorMessages) {'
rg "from .+ import.+(ErrorMessages|CommonErrorMessages)" --type py

Length of output: 1163


Script:

#!/bin/bash
# Check the definition of error message constants
rg -A 5 "class APIErrorMessages|APIV1CommonErrorMessages" --type py
cat src/django/api/constants.py

Length of output: 20898

src/django/oar/settings.py (1)

19-20: LGTM! Consistent error handling configuration.

Good addition of NON_FIELD_ERRORS_KEY to standardize error handling across the application. The comments clearly explain the purpose of redefining this key.

Also applies to: 176-180

src/django/api/views/facility/facilities_view_set.py (1)

582-584: LGTM! Consistent error handling update.

The change from ErrorMessages to APIErrorMessages maintains consistency with the codebase-wide standardization of error handling.

Contributor

@VadimKovalenkoSNF VadimKovalenkoSNF left a comment


Good work, thank you. I've added a couple of comments and suggestions + do not forget to revise the CodeRabbit insights.

…roduction-locations-to-support-slc-and-api-mod

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
doc/release/RELEASE-NOTES.md (2)

58-59: Fix indentation in release instructions

The release instructions section has incorrect indentation. The list items should be indented by 2 spaces instead of 4 spaces to maintain consistency with Markdown standards.

* Ensure that the following commands are included in the `post_deployment` command:
-    * `migrate`
-    * `reindex_database`
+  * `migrate`
+  * `reindex_database`
🧰 Tools
🪛 Markdownlint (0.35.0)

58-58: Expected: 2; Actual: 4
Unordered list indentation

(MD007, ul-indent)


59-59: Expected: 2; Actual: 4
Unordered list indentation

(MD007, ul-indent)


34-37: Improve visibility of breaking changes

The breaking changes to the endpoints should be more prominently highlighted to ensure developers are aware of potential impacts.

Consider restructuring this section to:

- * [OSDEV-1449](https://opensupplyhub.atlassian.net/browse/OSDEV-1449) - **Breaking changes** to the following endpoints:
-  - GET `v1/moderation-events/`
-  - GET `v1/production-locations/`
+ * [OSDEV-1449](https://opensupplyhub.atlassian.net/browse/OSDEV-1449) - ⚠️ BREAKING CHANGES:
+  1. GET `v1/moderation-events/`
+  2. GET `v1/production-locations/`
+
+  > ⚠️ These changes affect:
+  > - `sort_by` parameter functionality
+  > - `search_after` parameter split into `search_after_value` and `search_after_id`
🧰 Tools
🪛 Markdownlint (0.35.0)

36-36: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


37-37: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between f26e9a1 and 2cda5bc.

📒 Files selected for processing (1)
  • doc/release/RELEASE-NOTES.md (2 hunks)
🧰 Additional context used
🪛 Markdownlint (0.35.0)
doc/release/RELEASE-NOTES.md

36-36: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


37-37: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


40-40: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


41-41: Expected: asterisk; Actual: dash
Unordered list style

(MD004, ul-style)


58-58: Expected: 2; Actual: 4
Unordered list indentation

(MD007, ul-indent)

🔇 Additional comments (1)
doc/release/RELEASE-NOTES.md (1)

15-29: Verify database migration impact

The ModerationEvent table changes include significant modifications to primary key and field constraints. This could have potential impact on existing data and system performance.

Run the following script to assess the impact:

✅ Verification successful

Based on the search results, I can now generate a final response regarding the database migration impact:

Migration changes to ModerationEvent table are safe to proceed

The changes to the ModerationEvent table are well-structured and maintain data integrity:

  • Making contributor non-nullable is safe as the codebase shows no instances of null contributors in moderation events
  • Making geocode_result optional is aligned with existing code patterns where it's already handled as an optional field in multiple places
  • The claim field being optional matches existing usage patterns where moderation events can exist without claims
  • Removal of redundant constraints follows Django best practices without impacting functionality
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for potential issues with the migration

# Check for any existing null values in contributor field
echo "Checking for null contributors..."
rg "contributor.*=.*null" src/django/api/migrations/

# Check for any existing claims that might be affected
echo "Checking for existing claims..."
rg "claim.*=.*null" src/django/api/migrations/

# Check for any code dependencies on geocode_result
echo "Checking geocode_result dependencies..."
rg "geocode_result" src/django/

Length of output: 67771


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (5)
src/django/api/views/v1/moderation_events.py (2)

35-44: Consider moving OpenSearch initialization to a dedicated service class.

While the static method is well-implemented, consider extracting this initialization logic to a dedicated service class to improve separation of concerns and reusability.

Example refactor:

class OpenSearchServiceFactory:
    @staticmethod
    def create_services() -> Tuple[OpenSearchService, OpenSearchQueryDirector]:
        opensearch_service = OpenSearchService()
        moderation_events_query_builder = ModerationEventsQueryBuilder()
        opensearch_query_director = OpenSearchQueryDirector(
            moderation_events_query_builder
        )
        return (opensearch_service, opensearch_query_director)

Line range hint 118-142: Consider more specific exception handling.

While the implementation is solid, the broad Exception catch could mask specific errors. Consider catching specific exceptions to provide more detailed error responses.

Example improvement:

         try:
             item = add_production_location.run_processing()
-        except Exception as error:
+        except (ValueError, ValidationError) as error:
             return ModerationEventsService.handle_processing_error(
                 error
             )
+        except DatabaseError as error:
+            return Response(
+                {"detail": "Database operation failed"},
+                status=status.HTTP_500_INTERNAL_SERVER_ERROR
+            )
+        except Exception as error:
+            # Log unexpected errors
+            logger.error(f"Unexpected error: {error}")
+            return Response(
+                {"detail": "An unexpected error occurred"},
+                status=status.HTTP_500_INTERNAL_SERVER_ERROR
+            )
src/django/api/views/v1/production_locations.py (1)

43-52: Consider adding type hints and docstring.

The method implementation looks good, but could benefit from:

  1. Type hints for the return tuple elements
  2. A docstring explaining the purpose and return value
 @staticmethod
-def __init_opensearch() -> Tuple[OpenSearchService,
-                                     OpenSearchQueryDirector]:
+def __init_opensearch() -> Tuple[OpenSearchService, OpenSearchQueryDirector]:
+    """
+    Initialize and return OpenSearch service instances.
+    
+    Returns:
+        Tuple containing OpenSearchService and OpenSearchQueryDirector instances
+    """
     opensearch_service = OpenSearchService()
src/django/api/moderation_event_actions/creation/location_contribution/processors/production_location_data_processor.py (2)

88-93: Adjust docstrings to follow PEP 257 conventions

The docstrings should follow PEP 257 conventions, which recommend using triple double quotes """ and having a one-line summary followed by an optional detailed description.

Update the docstrings as follows:

At lines 88-93:

     @staticmethod
     def __map_fields_to_supported_cc_schema(data_to_map: Dict) -> Dict:
-        '''
-        This method maps input fields to the predefined set of supported
-        fields in the ContriCleaner library. It should be a temporary solution
-        until some form of serializer versioning is implemented for
-        ContriCleaner.
-        '''
+        """
+        Maps input fields to the predefined set of supported fields in the ContriCleaner library.
+
+        This is a temporary solution until serializer versioning is implemented for ContriCleaner.
+        """

At lines 139-144:

     @staticmethod
     def __substitute_facility_type_in_error_message(
             facility_type_error: Dict) -> Dict:
-        '''
-        Replaces occurrences of a facility_type field name in the
-        ContriCleaner error message with a new location_type field name to
-        support the v1 API schema. It should be a temporary solution until
-        some form of serializer versioning is implemented for ContriCleaner.
-        '''
+        """
+        Replaces 'facility_type' with 'location_type' in error messages to support the v1 API schema.
+
+        This is a temporary solution until serializer versioning is implemented for ContriCleaner.
+        """

Also applies to: 139-144


151-157: Ensure safe access to field_mapping_config to prevent KeyError

In the method __substitute_facility_type_in_error_message, the code assumes that field_to_replace exists in field_mapping_config. If it doesn't, a KeyError will be raised. Consider adding a check to ensure safe access.

You can modify the method as follows:

     field_to_replace = facility_type_error['field']

+    if field_to_replace not in field_mapping_config:
+        # If the field is not in the mapping, return the original error
+        return {
+            'field': field_to_replace,
+            'detail': facility_type_error['message']
+        }

     return {
         'field': field_mapping_config[field_to_replace],
         'detail': facility_type_error['message'].replace(
             field_to_replace,
             field_mapping_config[field_to_replace]
         )
     }
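The same guard can be expressed more compactly with `dict.get`, falling back to the original field name when no mapping exists (a sketch; the function and key names mirror the suggestion above and are assumptions):

```python
def substitute_field_name(error: dict, field_mapping_config: dict) -> dict:
    field = error["field"]
    # Fall back to the original name if the field has no mapping,
    # so an unmapped field never raises KeyError.
    mapped = field_mapping_config.get(field, field)
    return {
        "field": mapped,
        "detail": error["message"].replace(field, mapped),
    }

print(substitute_field_name(
    {"field": "facility_type", "message": "facility_type is required"},
    {"facility_type": "location_type"},
))
```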
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 2cda5bc and 1cc6465.

📒 Files selected for processing (14)
  • src/dedupe-hub/api/app/main.py (0 hunks)
  • src/dedupe-hub/api/app/matching/matcher/exact/exact_matcher.py (0 hunks)
  • src/dedupe-hub/api/app/matching/matcher/gazeteer/gazetteer_matcher.py (0 hunks)
  • src/dedupe-hub/api/app/matching/process_match.py (0 hunks)
  • src/django/api/facility_actions/processing_facility_api.py (2 hunks)
  • src/django/api/facility_actions/processing_facility_list.py (0 hunks)
  • src/django/api/models/transactions/clean_facilitylistitems.py (0 hunks)
  • src/django/api/moderation_event_actions/creation/location_contribution/location_contribution.py (1 hunks)
  • src/django/api/moderation_event_actions/creation/location_contribution/processors/contribution_processor.py (1 hunks)
  • src/django/api/moderation_event_actions/creation/location_contribution/processors/geocoding_processor.py (1 hunks)
  • src/django/api/moderation_event_actions/creation/location_contribution/processors/production_location_data_processor.py (1 hunks)
  • src/django/api/views/facility/facilities_view_set.py (2 hunks)
  • src/django/api/views/v1/moderation_events.py (5 hunks)
  • src/django/api/views/v1/production_locations.py (5 hunks)
💤 Files with no reviewable changes (6)
  • src/django/api/models/transactions/clean_facilitylistitems.py
  • src/dedupe-hub/api/app/matching/process_match.py
  • src/dedupe-hub/api/app/main.py
  • src/django/api/facility_actions/processing_facility_list.py
  • src/dedupe-hub/api/app/matching/matcher/exact/exact_matcher.py
  • src/dedupe-hub/api/app/matching/matcher/gazeteer/gazetteer_matcher.py
🚧 Files skipped from review as they are similar to previous changes (5)
  • src/django/api/facility_actions/processing_facility_api.py
  • src/django/api/views/facility/facilities_view_set.py
  • src/django/api/moderation_event_actions/creation/location_contribution/processors/contribution_processor.py
  • src/django/api/moderation_event_actions/creation/location_contribution/processors/geocoding_processor.py
  • src/django/api/moderation_event_actions/creation/location_contribution/location_contribution.py
🔇 Additional comments (9)
src/django/api/views/v1/moderation_events.py (4)

Line range hint 1-34: LGTM! Clean imports and proper permission setup.

The imports are well-organized and the ViewSet is properly configured with appropriate permission classes.


58-60: LGTM! Well-structured list implementation.

The method follows a clear flow with proper error handling and response formatting.

Also applies to: 64-67


72-85: LGTM! Clean retrieve implementation with proper validation.

Good use of class methods for validation and proper encapsulation of query construction.


Line range hint 89-117: LGTM! Well-structured partial update with proper validations.

The method follows REST conventions and implements a robust validation flow with standardized error responses.

src/django/api/views/v1/production_locations.py (4)

Line range hint 1-37: LGTM! Well-organized imports.

The imports are properly organized by source and all appear to be necessary for the implemented functionality.


54-74: LGTM! Well-implemented permission handling.

The implementation follows good practices with:

  • Clear separation of concerns between global and action-specific permissions
  • Proper documentation
  • Correct implementation of the suggested changes from the previous review

76-95: LGTM! Good security practices implemented.

The implementation shows good practices:

  • Proper throttling for data upload protection
  • Restricted content type handling for POST requests
  • Clear documentation and appropriate use of parent methods

107-109: LGTM! Consistent use of OpenSearch initialization.

The refactored OpenSearch initialization is properly integrated into both list and retrieve methods.

Also applies to: 123-125

src/django/api/moderation_event_actions/creation/location_contribution/processors/production_location_data_processor.py (1)

114-114: ⚠️ Potential issue

Correct the typo in method name __trasform_cc_errors_to_api_v1_format

The method name __trasform_cc_errors_to_api_v1_format appears to have a typo ("trasform" instead of "transform"). This typo is present in the method definition and in all calls to this method. This could lead to an AttributeError at runtime.

Apply the following diffs to fix the method name and its usage:

Fix the method definition at line 114:

-    def __trasform_cc_errors_to_api_v1_format(cc_errors: List[Dict]) -> Dict:
+    def __transform_cc_errors_to_api_v1_format(cc_errors: List[Dict]) -> Dict:

Update all calls to this method:

At line 48:

-                    self.__trasform_cc_errors_to_api_v1_format(
+                    self.__transform_cc_errors_to_api_v1_format(

At line 60:

-                    self.__trasform_cc_errors_to_api_v1_format(
+                    self.__transform_cc_errors_to_api_v1_format(

Also applies to: 48-48, 60-60

@sonarqubecloud

sonarqubecloud bot commented Dec 5, 2024

Quality Gate failed

Failed conditions
3.8% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud

Contributor

@roman-stolar roman-stolar left a comment


LGTM

