[OSDEV-1158] API. Create POST endpoint for v1/production-locations to support SLC & API Mod. (#407)
Conversation
* React App | Jest test suite - Code coverage report. Total: 27.74%. Your code coverage diff: 0.00% ▴ ✅ All code changes are covered
* Dedupe Hub App | Unittest test suite - Code coverage report. Total: 56.14%. Your code coverage diff: 0.00% ▴ ✅ All code changes are covered
* Contricleaner App | Unittest test suite - Code coverage report. Total: 98.91%. Your code coverage diff: 0.00% ▴ ✅ All code changes are covered
* Countries App | Unittest test suite - Code coverage report. Total: 100%. Your code coverage diff: 0.00% ▴ ✅ All code changes are covered
* Django App | Unittest test suite - Code coverage report. Total: 79.98%. Your code coverage diff: 0.42% ▴ ✅ All code changes are covered
Force-pushed from e5f546d to 9c46a51 (compare: …roduction-locations-to-support-slc-and-api-mod)
📝 Walkthrough

This pull request includes significant updates to the Open Supply Hub project, particularly focusing on the release version 1.26.0. Key changes involve database migrations and schema adjustments.
Actionable comments posted: 9
🧹 Outside diff range and nitpick comments (52)
src/django/contricleaner/tests/test_row_required_fields_serializer.py (1)
59-65: LGTM! Good enhancement to error reporting.

The addition of the 'field' key to error dictionaries improves error traceability and aligns with REST API best practices for structured error responses. This change supports the PR's goal of standardizing error handling across the API.
Consider documenting these error response structures in your API documentation to help API consumers handle errors effectively.
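For illustration, the documented shape could look like the following. This is a hedged sketch: the 'message'/'field'/'type' keys mirror the keys asserted in these tests, while the concrete values and the top-level 'detail'/'errors' wrapper are invented for the example.

```python
# Hypothetical example of a structured validation error response body.
# Keys mirror the tests in this PR; the values are invented.
error_response = {
    "detail": "The request body is invalid.",
    "errors": [
        {
            "message": "name is required.",
            "field": "name",
            "type": "RequiredFieldError",
        },
    ],
}

# Consumers can branch on the stable keys rather than parsing messages.
for error in error_response["errors"]:
    assert {"message", "field", "type"} <= set(error)
    print(f"{error['field']}: {error['message']}")
```

Documenting the keys (rather than the free-form messages) lets clients handle errors programmatically without string matching.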
src/django/contricleaner/tests/test_pre_header_validator.py (1)
Line range hint 11-45: Consider renaming test method for better clarity.

The current method name `test_validate_header` is quite generic. Consider a more descriptive name that indicates what scenarios are being tested, such as `test_validate_header_required_fields_validation`.

src/django/contricleaner/tests/test_row_sector_serializer.py (3)
86-86: LGTM! Consider documenting the error format.

The addition of the 'field' attribute to error messages improves error reporting clarity. This change aligns well with REST API best practices by providing more detailed error context.
Consider adding a comment block documenting the expected error message format:
```python
# Error message format:
# {
#     'message': str,  # Human-readable error description
#     'field': str,    # Field that caused the validation error
#     'type': str      # Type of error (ValueError, ValidationError, etc.)
# }
```

Also applies to: 92-92, 98-98
121-121: LGTM! Consider extracting the max product types limit.

The error message format is consistent with other validation errors. However, the maximum limit of 50 product types appears to be a magic number used in multiple places.
Consider extracting this limit to a constant:
```python
MAX_PRODUCT_TYPES = 50

# Then use it in both the test and error message:
self.assertEqual(
    result['errors'],
    [{
        'message': f'You may submit a maximum of {MAX_PRODUCT_TYPES} '
                   f'product types, not {product_type_count}.',
        'field': 'product_type',
        'type': 'ValidationError',
    }],
)
```
146-146: LGTM! Consider improving test data readability.

The error message format is consistent with other validation errors. However, the test data could be more readable.
Consider improving the test data readability:
```python
# Extract the long sector name to a constant
LONG_SECTOR_NAME = 'Agriculture' * 10  # Creates a 110-char string
row = {
    'sector': [LONG_SECTOR_NAME]
}
```
162-162: LGTM! Enhanced error validation coverage.

The addition of error field validation improves the test's robustness. Consider extracting 'non_field_errors' to a constant for better maintainability.
Consider applying this change:
```diff
+NON_FIELD_ERRORS = 'non_field_errors'
 ...
-expected_error_field = 'non_field_errors'
+expected_error_field = NON_FIELD_ERRORS
```

Also applies to: 176-176, 180-180
src/django/contricleaner/tests/test_source_parser_xlsx.py (1)
121-128: Improved error message structure and clarity

The error message changes improve the API response by:
- Adding field context through the 'field' key
- Providing more specific validation feedback
- Following a consistent error structure
Consider updating the API documentation to reflect this standardized error format.
src/django/contricleaner/lib/validators/pre_validators/pre_header_validator.py (2)
9-11: Fix inconsistent indentation in the required fields set

The continuation line is over-indented. It should align with the opening parenthesis of the previous line.

```diff
     __required_fields = {'name',
-                             'address',
-                             'country'}
+                         'address',
+                         'country'}
```
22-25: Consider using f-strings for better readability

While the error message formatting is correct, using an f-string would make it more readable.

```diff
-                'message': 'Required Fields are missing: {}.'
-                .format(', '.join(self.__required_fields)),
+                'message': f'Required Fields are missing: {", ".join(self.__required_fields)}.',
```

src/django/contricleaner/tests/test_contri_cleaner.py (1)
Line range hint 1-211: Consider test structure improvements

While the test coverage is comprehensive, consider these improvements:
- Move test data to fixtures for better maintainability
- Add edge cases for special characters in facility names and addresses
- Consider parameterized tests to reduce code duplication between CSV and XLSX tests
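One way to share a single test body across the CSV and XLSX variants is unittest's `subTest`. The following is a minimal, self-contained sketch: the class and the parse step are stand-ins, not the project's actual parser API.

```python
import unittest


class SharedParserBehaviorTest(unittest.TestCase):
    """Illustrative only: one test body shared across two source formats."""

    def test_parsers_accept_required_fields(self):
        # Failures are reported per sub-test, so a CSV failure does not
        # mask an XLSX failure.
        cases = {
            'csv': {'name': 'A', 'address': 'B', 'country': 'C'},
            'xlsx': {'name': 'A', 'address': 'B', 'country': 'C'},
        }
        for fmt, row in cases.items():
            with self.subTest(fmt=fmt):
                # Stand-in for: parsed = parse_source(fmt, raw_data)
                parsed = dict(row)
                self.assertEqual(parsed, row)
```

The class runs under any unittest-compatible runner; pytest's `@pytest.mark.parametrize` would be the equivalent if the suite moved to pytest-style tests.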
src/django/contricleaner/tests/test_composite_row_serializer.py (2)
46-46: Add test coverage for multiple facility types

While the current test case verifies basic serialization, we've lost coverage for the split pattern handling (`r', |,|\|'`) with multiple facility types. Consider adding another test method to verify this functionality.

Example test case:

```python
def test_get_serialized_row_multiple_facility_types(self):
    facility_source = {
        'facility_type': 'Blending|Knitting, Dyeing'
    }
    serialized_row = self.composite_row_serializer.validate(facility_source)
    self.assertEqual(
        serialized_row['fields']['facility_type']['processed_values'],
        ['Blending', 'Knitting', 'Dyeing']
    )
```

Also applies to: 56-57, 60-61
56-61: Consider documenting the relationship between facility_type and processing_type

The test reveals that `processing_type` always mirrors `facility_type` values. This tight coupling should be documented, and we should consider whether this duplication is intentional or if these fields serve different purposes.

Consider:
- Adding a comment explaining the relationship between these fields
- Evaluating if these fields should be merged if they always contain the same values
- Documenting any scenarios where these fields might diverge
Also applies to: 106-111
src/django/api/moderation_event_actions/creation/location_contribution/location_contribution.py (1)
34-48: Avoid double underscore prefix to prevent name mangling

The method `__setup_location_data_processors` is using double underscores, which triggers name mangling in Python. If the intent is to make it a non-public method accessible within subclasses, consider using a single underscore `_setup_location_data_processors` instead.

Apply this diff to adjust the method name:

```diff
-    def __setup_location_data_processors() -> ContributionProcessor:
+    def _setup_location_data_processors() -> ContributionProcessor:
```

And update the method call in `serialize`:

```diff
-        entry_location_data_processor = self.__setup_location_data_processors()
+        entry_location_data_processor = self._setup_location_data_processors()
```

src/django/api/views/v1/production_locations.py (1)
Line range hint 122-128: Standardize error message capitalization

In the error response, consider capitalizing "ID" for consistency with common conventions.

Apply this diff:

```diff
-    "detail": "The location with the given id was not found.",
+    "detail": "The location with the given ID was not found.",
```

src/django/api/tests/test_location_contribution_strategy.py (1)
85-176: Separate test cases into individual methods for clarity

The `test_invalid_source_value_cannot_be_accepted` method contains multiple test scenarios within a single method. For better readability and isolation of test cases, consider splitting them into separate methods.

src/django/api/moderation_event_actions/creation/dtos/create_moderation_event_dto.py (2)
9-19: Add type hints and documentation for all fields

The class structure is clean, but could benefit from additional documentation and type hints:
- Add docstring explaining the purpose of the DTO
- Document each field's purpose and constraints
Consider adding validation for `request_type` using Enum

Example enhancement:

```diff
 @dataclass
 class CreateModerationEventDTO:
+    """Data Transfer Object for creating moderation events.
+
+    Contains all necessary data for creating and tracking a moderation event,
+    including both required and optional fields.
+    """
     contributor_id: int
+    """ID of the contributor initiating the moderation event."""
     raw_data: Dict
+    """Raw input data before processing."""
     request_type: str  # Consider using Enum
+    """Type of request being processed."""
```
14-18: Consider adding validation for the Dict fields

The Dict fields (`cleaned_data`, `geocode_result`, `errors`) could benefit from more specific type hints to ensure data consistency.

Consider using TypedDict or adding runtime validation:

```python
from typing import TypedDict

class GeocodeResult(TypedDict, total=False):
    lat: float
    lng: float
    formatted_address: str

class CleanedData(TypedDict, total=False):
    # Add expected fields here
    pass
```

src/django/api/moderation_event_actions/creation/event_creation_strategy.py (1)
7-12: Enhance documentation for the strategy interface

While the abstract class is well-structured, it would benefit from more detailed documentation:
- Add class-level docstring explaining the strategy pattern usage
- Document expected behavior and error handling requirements
Example enhancement:
```diff
 class EventCreationStrategy(ABC):
+    """Abstract base class defining the interface for moderation event
+    creation strategies.
+
+    Implementations of this strategy are responsible for serializing and
+    validating moderation event data before creation. Each strategy can
+    implement custom validation and processing logic while maintaining a
+    consistent interface.
+    """
     @abstractmethod
     def serialize(
             self,
             event_dto: CreateModerationEventDTO) -> CreateModerationEventDTO:
+        """Process and validate the event data before creation.
+
+        Args:
+            event_dto: The DTO containing the event data to be processed.
+
+        Returns:
+            The processed DTO with validated and transformed data.
+
+        Raises:
+            ValidationError: If the event data is invalid.
+        """
         pass
```

src/django/contricleaner/lib/helpers/split_values.py (1)
5-6: Consider performance optimization for large datasets

The current implementation performs multiple conversions (str->set->list) which could be inefficient for large datasets.
Consider maintaining the list structure throughout:
```diff
-def split_values(value: Union[str, list, set],
-                 split_pattern: str) -> List[str]:
+def split_values(
+    value: Union[str, list, set],
+    split_pattern: str,
+    *,
+    remove_duplicates: bool = True
+) -> List[str]:
     # ...
-    return list(set(re.split(split_pattern, value)))
+    result = re.split(split_pattern, value)
+    return list(set(result)) if remove_duplicates else result
```

Also applies to: 16-16
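A related trade-off worth noting: set-based dedup also discards input order. If callers care about order, an order-preserving variant is possible. A minimal sketch follows; `split_values_ordered` is a hypothetical name, not the project's helper, and the list/set branch is an assumption about how multi-value inputs are handled.

```python
import re

def split_values_ordered(value, split_pattern):
    # dict.fromkeys dedupes while keeping first-seen order,
    # unlike list(set(...)) which scrambles it.
    if isinstance(value, (list, set)):
        parts = []
        for item in value:
            parts.extend(re.split(split_pattern, item))
    else:
        parts = re.split(split_pattern, value)
    return list(dict.fromkeys(parts))

print(split_values_ordered("a, b|a", r", |,|\|"))  # ['a', 'b']
```

`dict.fromkeys` has been order-preserving since Python 3.7, so this costs no extra dependency over the set-based version.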
src/django/contricleaner/lib/serializers/row_serializers/row_clean_field_serializer.py (1)
15-22: LGTM! Consider standardizing error message format.

The error message structure and content improvements are good. The addition of the 'field' key helps with error identification.
Consider using a constant for the error message template to maintain consistency across the codebase:
```diff
-                'message': (
-                    '{} cannot consist solely of '
-                    'punctuation or whitespace.'
-                ).format(self.field),
+                'message': ERROR_MESSAGES['INVALID_FIELD_CONTENT'].format(
+                    field=self.field
+                ),
```

src/django/api/serializers/v1/contribution_moderation_event_source_field_serializer.py (1)
18-32: Consider using f-strings consistently and improve error message structure.

The validation logic is solid, but the error message construction could be improved.
Consider this refactor:
```diff
-        stringified_source_values = ', '.join(source_values)
-        raise ValidationError(
-            (f'The source value should be one of the following: '
-             f'{stringified_source_values}.')
-        )
+        raise ValidationError({
+            'source': f'Invalid source. Allowed values are: {", ".join(source_values)}'
+        })
```
- Uses consistent string formatting
- Follows DRF's error message structure
- Makes the error message more concise
src/django/contricleaner/tests/test_row_clean_field_serializer.py (1)
24-28: Consider adding more test cases for comprehensive validation.

While the current tests cover the basic scenarios, consider adding tests for:
- Different types of whitespace characters
- Pure punctuation input
- Mixed whitespace and punctuation
Example:
```python
def test_validate_punctuation_only(self):
    punctuation_row = {"field": "!@#$%"}
    result = self.serializer.validate(punctuation_row, self.current.copy())
    self.assertEqual(len(result["errors"]), 1)
    self.assertEqual(
        result["errors"][0]["message"],
        "field cannot consist solely of punctuation or whitespace."
    )

def test_validate_mixed_whitespace_punctuation(self):
    mixed_row = {"field": " ! @ "}
    result = self.serializer.validate(mixed_row, self.current.copy())
    self.assertEqual(len(result["errors"]), 1)
```

src/django/api/moderation_event_actions/creation/moderation_event_creator.py (2)
9-12: Add docstring to describe the class purpose and strategy pattern usage.

The class implements the strategy pattern for moderation event creation, which should be documented for better maintainability.

```diff
 class ModerationEventCreator:
+    """
+    Creates moderation events using a provided creation strategy.
+
+    This class implements the strategy pattern to allow different creation
+    strategies while maintaining a consistent interface for event creation.
+    """
     def __init__(self, strategy: EventCreationStrategy) -> None:
         self.__strategy = strategy
```
13-31: Consider improving error handling and transaction management.

The method could benefit from:
- Better error handling for database operations
- Transaction management to ensure atomicity
- Method documentation
```diff
 def perform_event_creation(
         self,
         event_dto: CreateModerationEventDTO
 ) -> CreateModerationEventDTO:
+    """
+    Creates a moderation event using the configured strategy.
+
+    Args:
+        event_dto: Data transfer object containing event creation details
+
+    Returns:
+        The input DTO updated with the created moderation event or errors
+    """
+    from django.db import transaction
+
     processed_event = self.__strategy.serialize(event_dto)
     if processed_event.errors:
         return event_dto

-    event_dto.moderation_event = ModerationEvent.objects.create(
-        contributor=processed_event.contributor_id,
-        request_type=processed_event.request_type,
-        raw_data=processed_event.raw_data,
-        cleaned_data=processed_event.cleaned_data,
-        geocode_result=processed_event.geocode_result,
-        source=processed_event.source
-    )
+    try:
+        with transaction.atomic():
+            event_dto.moderation_event = ModerationEvent.objects.create(
+                contributor=processed_event.contributor_id,
+                request_type=processed_event.request_type,
+                raw_data=processed_event.raw_data,
+                cleaned_data=processed_event.cleaned_data,
+                geocode_result=processed_event.geocode_result,
+                source=processed_event.source
+            )
+    except Exception as e:
+        event_dto.errors.append({
+            "message": f"Failed to create moderation event: {str(e)}",
+            "type": "Error"
+        })
```

src/django/contricleaner/tests/test_split_values_function.py (1)
10-10: Consider adding edge cases while keeping the improved assertions.

The change to `assertCountEqual` is appropriate for unordered collections. Consider adding tests for edge cases:
- Empty strings/lists/sets
- None values
- Invalid regex patterns
- Unicode characters
Example additional test:
```python
def test_edge_cases(self):
    # Empty inputs
    self.assertCountEqual(split_values("", ","), [])
    self.assertCountEqual(split_values([], ","), [])
    self.assertCountEqual(split_values(set(), ","), [])

    # Unicode characters
    self.assertCountEqual(
        split_values("café,résumé", ","),
        ["café", "résumé"]
    )
```

Also applies to: 15-15, 20-20, 28-38
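One caveat for the empty-string case above, shown as a standalone sketch independent of the project's helper: `re.split` on an empty string returns `['']`, not `[]`, so the helper would need to filter out falsy parts for that assertion to pass.

```python
import re

# re.split never returns an empty list; splitting "" yields [''].
print(re.split(",", ""))  # ['']

# Filtering falsy parts gives the behavior the suggested test expects.
parts = [p for p in re.split(",", "") if p]
print(parts)  # []

# The same filter also drops empties from consecutive delimiters.
print([p for p in re.split(",", "a,,b") if p])  # ['a', 'b']
```

Whether dropping empty segments is desirable depends on the helper's contract, so the filter is an assumption worth confirming before adding the test.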
src/django/api/moderation_event_actions/creation/location_contribution/processors/contribution_processor.py (1)
36-41: Consider enhancing error logging with more context.

The error logging could be more helpful for debugging by including the event_dto details (excluding sensitive data).

```diff
-        log.error(
-            ('[API V1 Location Upload] Internal Moderation Event Creation '
-             'Error: The non-existing next handler for processing the '
-             'moderation event creation related to location contribution was '
-             'called in the last handler of the chain.')
-        )
+        log.error(
+            ('[API V1 Location Upload] Internal Moderation Event Creation '
+             'Error: The non-existing next handler for processing the '
+             'moderation event creation related to location contribution was '
+             'called in the last handler of the chain. '
+             'Event DTO status: %s'), event_dto.status_code)
```

src/django/api/moderation_event_actions/creation/location_contribution/processors/source_processor.py (2)
16-33: Consider adding debug logging for validation failures.

While the error handling is good, adding debug logging would help with troubleshooting validation issues in production.

```diff
         if not serializer.is_valid():
+            log.debug(
+                '[API V1 Location Upload] Source validation failed: %s',
+                serializer.errors
+            )
             transformed_errors = self.__transform_serializer_errors(
                 serializer.errors
             )
```
35-51: Add type hints for dictionary values.

The type hint for the dictionary could be more specific to better document the expected structure.

```diff
-    def __transform_serializer_errors(serializer_errors: Dict) -> Dict:
+    def __transform_serializer_errors(
+        serializer_errors: Dict[str, List[str]]
+    ) -> Dict[str, Union[str, List[Dict[str, str]]]]:
```

src/django/contricleaner/lib/serializers/row_serializers/row_required_fields_serializer.py (3)
15-15: Extract magic number 200 into a constant.

The maximum field length of 200 should be defined as a class constant for better maintainability.

```diff
+    __MAX_FIELD_LENGTH = 200
-    __valid_field_value_lengths = {'name': 200, 'address': 200}
+    __valid_field_value_lengths = {
+        'name': __MAX_FIELD_LENGTH,
+        'address': __MAX_FIELD_LENGTH
+    }
```
Line range hint 39-39: Fix indentation inconsistency.

The method's indentation doesn't match the class's style (8 spaces instead of 4).

```diff
-    def __validate_field_value_lengths(self,
-                                       missing_fields: Set[str],
-                                       input_row: Dict) -> List[Dict]:
+    def __validate_field_value_lengths(
+            self,
+            missing_fields: Set[str],
+            input_row: Dict) -> List[Dict]:
```
Line range hint 47-55: Consider using f-strings for better readability.

The string formatting could be simplified using f-strings.

```diff
-                'message': ('There is a problem with the {0}: '
-                            'Ensure this value has at most 200 '
-                            'characters. (it has {1})').format(
-                                field, value_len),
+                'message': (
+                    f'There is a problem with the {field}: '
+                    f'Ensure this value has at most {max_length} '
+                    f'characters. (it has {value_len})'
+                ),
```

src/django/contricleaner/lib/serializers/row_serializers/row_facility_type_serializer.py (2)
41-45: Error message structure improvement looks good, but consider refactoring the validation logic.

While the error message formatting changes are good, the validation method could benefit from some refactoring to improve maintainability.
Consider extracting the validation logic into smaller, focused methods:
```diff
 class RowFacilityTypeSerializer(RowSerializer):
+    def _validate_field(self, field: str, value: any) -> dict:
+        if value and not is_valid_type(value):
+            return {
+                'message': f'Expected value for {field} to be a string '
+                           f'or a list of strings but got {value}.',
+                'field': field,
+                'type': 'ValueError',
+            }
+        return None

     def validate(self, row: dict, current: dict) -> dict:
         facility_type = row.get('facility_type')
         processing_type = row.get('processing_type')
         facility_type_processing_type = row.get('facility_type_processing_type')

         if not any((facility_type, processing_type, facility_type_processing_type)):
             return current

         facility_type_errors = []
         fields = ['facility_type', 'processing_type', 'facility_type_processing_type']
         values = [facility_type, processing_type, facility_type_processing_type]

         for field, value in zip(fields, values):
-            if value and not is_valid_type(value):
-                facility_type_errors.append({
-                    'message': 'Expected value for {} to be a string or a '
-                               'list of strings but got {}.'.format(field, value),
-                    'field': field,
-                    'type': 'ValueError',
-                })
+            error = self._validate_field(field, value)
+            if error:
+                facility_type_errors.append(error)
```
50-50: Consider using list comprehension for error collection.

The error collection could be more Pythonic.

```diff
-        facility_type_errors = []
-        for field, value in zip(fields, values):
-            if value and not is_valid_type(value):
-                facility_type_errors.append(...)
-        if facility_type_errors:
-            current['errors'].extend(facility_type_errors)
+        errors = [self._validate_field(f, v) for f, v in zip(fields, values)]
+        errors = [e for e in errors if e is not None]
+        if errors:
+            current['errors'].extend(errors)
```
Line range hint
103-108: Consider adding validation for geocode_resultWhile making geocode_result optional is valid, consider adding validation to ensure that when it's blank, latitude and longitude are indeed provided in the cleaned_data.
geocode_result = models.JSONField( blank=True, default=dict, help_text=( 'Result of the geocode operation.' ) ) + +def clean(self): + super().clean() + if not self.geocode_result: + if not (self.cleaned_data.get('lat') and self.cleaned_data.get('lng')): + raise ValidationError( + 'Either geocode_result or lat/lng must be provided' + )src/django/api/serializers/v1/moderation_events_serializer.py (1)
88-92: Consider adding error logging

The comment `[OSDEV-1441]` suggests adding error logging to Rollbar. Consider implementing this now to avoid technical debt.

```diff
         if errors:
-            # [OSDEV-1441] Pass error msg to the Rollbar here
+            logger.error(
+                "Validation errors in ModerationEventsSerializer",
+                extra={"errors": errors}
+            )
             raise ValidationError({
                 "detail": APIV1CommonErrorMessages.COMMON_REQ_QUERY_ERROR,
                 "errors": errors
             })
```

src/django/contricleaner/lib/contri_cleaner.py (1)
58-60: Consider adding type hints to error dictionary

While the error handling enhancement is good, consider adding type hints to improve code maintainability.

```diff
+from typing import TypedDict
+
+
+class ParsingErrorDict(TypedDict):
+    message: str
+    field: str
+    type: str
+
-        return ListDTO(errors=[{
-            'message': str(err),
-            'field': NON_FIELD_ERRORS_KEY,
-            'type': 'ParsingError',
-        }])
+        return ListDTO(errors=[ParsingErrorDict(
+            message=str(err),
+            field=NON_FIELD_ERRORS_KEY,
+            type='ParsingError',
+        )])
```

src/django/contricleaner/lib/serializers/row_serializers/row_sector_serializer.py (3)
37-42: Extract error message format to a constant

Consider extracting the error message format string to a class constant for better maintainability.

```diff
+    __INVALID_TYPE_ERROR_FORMAT = (
+        'Expected value for {} to be a string or a list of strings but got {}.'
+    )

     def validate(self, row: dict, current: dict) -> dict:
         # ...
         sector_errors.append({
-            'message': 'Expected value for {} to be a string or a list of '
-                       'strings but got {}.'.format(field, value),
+            'message': self.__INVALID_TYPE_ERROR_FORMAT.format(field, value),
             'field': field,
             'type': 'ValueError',
         })
```
57-64: Consider creating an ErrorBuilder utility class

The error dictionary structure is repeated across methods. Consider creating a utility class to standardize error creation.

```diff
+class ValidationErrorBuilder:
+    @staticmethod
+    def build(message: str, field: str, error_type: str) -> dict:
+        return {
+            'message': message,
+            'field': field,
+            'type': error_type
+        }

 # Usage:
 current['errors'].append(
-    {
-        'message': 'You may submit a maximum of {} product types, '
-                   'not {}.'.format(MAX_PRODUCT_TYPE_COUNT, len(product_types)),
-        'field': 'product_type',
-        'type': 'ValidationError',
-    }
+    ValidationErrorBuilder.build(
+        message=f'You may submit a maximum of {MAX_PRODUCT_TYPE_COUNT} '
+                f'product types, not {len(product_types)}.',
+        field='product_type',
+        error_type='ValidationError'
+    )
 )
```
109-111: Add docstring to private validation method

The `__validate_sector_value_lengths` method would benefit from a docstring explaining its purpose and return value.

```python
def __validate_sector_value_lengths(self, sectors: List) -> List:
    """
    Validates that all sector values are within the maximum length limit.

    Args:
        sectors: List of sector values to validate

    Returns:
        List of validation errors if any sectors exceed the length limit,
        empty list otherwise
    """
    if not self.__are_sector_values_valid_length(sectors):
        return [{
```

src/django/contricleaner/tests/test_row_facility_type_serializer.py (2)
24-46: Consider adding test cases for edge cases

While the test improvements are good, consider adding test cases for:
- Empty strings
- Whitespace-only values
- Special characters
- Maximum length values
Example test case:
```python
def test_validate_with_whitespace_facility_type(self):
    row = {'facility_type': ' '}
    current = {}
    validated = self.serializer.validate(row, current)
    self.assertEqual(
        validated['facility_type']['processed_values'],
        []
    )
```
114-116: Extract test data to class constants

Consider moving test data to class constants for better maintainability and reusability.

```diff
 class RowFacilityTypeSerializerTest(TestCase):
+    INVALID_FACILITY_TYPE = 123
+    INVALID_TYPE_ERROR_FORMAT = (
+        'Expected value for {} to be a string or a list of strings but got {}.'
+    )

     def test_validate_with_invalid_facility_type(self):
-        row = {'facility_type': 123}
+        row = {'facility_type': self.INVALID_FACILITY_TYPE}
         current = {'errors': []}
         validated = self.serializer.validate(row, current)
         self.assertEqual(
             validated,
             {
                 'errors': [{
                     'message': self.INVALID_TYPE_ERROR_FORMAT.format(
-                        'facility_type', 123
+                        'facility_type', self.INVALID_FACILITY_TYPE
                     ),
                     'field': 'facility_type',
                     'type': 'ValueError',
                 }],
             }
         )
```

src/django/api/views/v1/moderation_events.py (1)
114-117: Consider using more specific exception handling

While the error handling is functional, using a bare `except` clause might catch too many exceptions. Consider catching specific exceptions that you expect to handle.

```diff
         try:
             item = add_production_location.run_processing()
-        except Exception as error:
+        except (ValueError, ValidationError) as error:
+            # Add other specific exceptions as needed
             return self.moderation_events_service.handle_processing_error(
                 error
             )
```

Also applies to: 126-131
src/django/api/tests/test_v1_utils.py (1)
145-146: LGTM! Tests updated to match new error field name

The test correctly validates the new error field naming using `non_field_errors`.

Fix indentation inconsistency

The indentation on line 146 uses spaces inconsistently.

```diff
-        self.assertEqual(response.data['errors'][0]['field'],
-                         "non_field_errors")
+        self.assertEqual(
+            response.data['errors'][0]['field'],
+            "non_field_errors"
+        )
```
195-198: Consider adding a deprecation warning for APIErrorMessages.

Since new v1-specific error message classes have been introduced, consider marking this class as deprecated to encourage usage of the new classes.

```diff
 class APIErrorMessages:
+    """
+    @deprecated: Use APIV1CommonErrorMessages or
+    APIV1LocationContributionErrorMessages instead.
+    """
     GEOCODED_NO_RESULTS = 'The address you submitted can not be geocoded.'
```
232-235: Consider moving NON_FIELD_ERRORS_KEY to a more general constants class.

This constant seems more general-purpose and might be useful outside the context of API v1. Consider moving it to a more general constants class.
src/django/api/tests/test_facility_submit.py (1)
Line range hint 76-77: Address TODO comments regarding Dedupe Hub integration.

There are several TODO comments mentioning issues between the test database and Dedupe Hub live database. These should be addressed to ensure proper test coverage.
Would you like me to help create a solution for mocking the Dedupe Hub integration in these tests?
Also applies to: 142-143, 166-167
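As a starting point for that discussion, outbound calls can be isolated with `unittest.mock`. The sketch below is illustrative only: `DedupeClient` and `submit_record` are invented names, not the project's real integration API.

```python
from unittest.mock import MagicMock

class DedupeClient:
    """Stand-in for the live Dedupe Hub client (illustrative only)."""
    def submit_record(self, record):
        raise RuntimeError("would hit the live Dedupe Hub")

def process(client, record):
    # Code under test receives the client via injection,
    # so tests can swap in a mock for the live service.
    return client.submit_record(record)

client = MagicMock(spec=DedupeClient)
client.submit_record.return_value = {"status": "queued"}

result = process(client, {"name": "Test Facility"})
print(result)  # {'status': 'queued'}
client.submit_record.assert_called_once_with({"name": "Test Facility"})
```

Using `spec=DedupeClient` makes the mock reject calls to methods the real client does not define, which keeps the tests honest as the integration evolves.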
src/django/api/tests/test_production_locations_create.py (4)
47-68: Consider using a constant for the error message.

The error message string could be moved to a constant to maintain consistency and make updates easier.

```diff
+    ERROR_MESSAGE_UNCONFIRMED = (
+        'User must be registered and have confirmed their email to access.'
+    )

     def test_only_registered_and_confirmed_has_access(self):
         expected_response_body = {
-            'detail': ('User must be registered and have confirmed their '
-                       'email to access.')
+            'detail': self.ERROR_MESSAGE_UNCONFIRMED
         }
```
70-102: Extract magic numbers into constants.

The throttling limits (30 requests) should be defined as class constants for better maintainability.

```diff
+    THROTTLE_LIMIT = 30
+    EXPECTED_SUCCESS_STATUS = 202

     @patch('api.geocoding.requests.get')
     def test_default_throttling_is_applied(self, mock_get):
         # ...
-        for _ in range(30):
+        for _ in range(self.THROTTLE_LIMIT):
             response = self.client.post(
                 self.url,
                 self.common_valid_req_body,
                 content_type='application/json'
             )
-            self.assertEqual(response.status_code, 202)
+            self.assertEqual(response.status_code, self.EXPECTED_SUCCESS_STATUS)
```
113-131: Add docstring to explain the waffle switch usage.

The test uses a waffle switch for feature flagging, but its purpose isn't immediately clear.

```diff
 @override_switch('disable_list_uploading', active=True)
 def test_client_cannot_post_when_upload_is_blocked(self):
+    """
+    Test that the endpoint returns 503 when the 'disable_list_uploading'
+    feature flag is active. This simulates maintenance mode where uploads
+    are temporarily disabled.
+    """
     expected_error = (
         'Open Supply Hub is undergoing maintenance and not accepting new '
         'data at the moment. Please try again in a few minutes.'
     )
```
132-159: Extract error messages into constants.

Error messages are repeated and should be moved to class constants.

```diff
+    ERROR_GENERAL = 'The request body is invalid.'
+    ERROR_INVALID_STRUCTURE = (
+        'Invalid data. Expected a dictionary (object), but got list.'
+    )
+    ERROR_FIELD = 'non_field_errors'

     def test_endpoint_supports_only_dictionary_structure(self):
         expected_general_error = (
-            'The request body is invalid.'
+            self.ERROR_GENERAL
         )
```

doc/release/RELEASE-NOTES.md (2)
15-29: Fix unordered list formatting

The unordered list uses inconsistent markers. Replace dashes with asterisks to maintain consistent formatting throughout the document.

```diff
 * 0162_update_moderationevent_table_fields.py
-  - This migration updates the ModerationEvent table and its constraints.
+  * This migration updates the ModerationEvent table and its constraints.
```
31-46: Fix list indentation

The list items under Code/API changes have inconsistent indentation. Adjust to maintain 2-space indentation for all items.

```diff
 * [OSDEV-1453](https://opensupplyhub.atlassian.net/browse/OSDEV-1453) - The `detail` keyword instead of `message` has been applied in error response objects for V1 endpoints.
-1. Introduced a new POST `/api/v1/production-locations/` endpoint based on the API v1 specification. This endpoint allows the creation of a new moderation event for the production location creation with the given details.
-2. Removed redundant redefinition of paths via the `as_view` method for all the v1 API endpoints since they are already defined via `DefaultRouter`.
+  1. Introduced a new POST `/api/v1/production-locations/` endpoint based on the API v1 specification. This endpoint allows the creation of a new moderation event for the production location creation with the given details.
+  2. Removed redundant redefinition of paths via the `as_view` method for all the v1 API endpoints since they are already defined via `DefaultRouter`.
```
🪛 Markdownlint (0.35.0)
36-36: Expected: asterisk; Actual: dash
Unordered list style(MD004, ul-style)
37-37: Expected: asterisk; Actual: dash
Unordered list style(MD004, ul-style)
40-40: Expected: asterisk; Actual: dash
Unordered list style(MD004, ul-style)
41-41: Expected: asterisk; Actual: dash
Unordered list style(MD004, ul-style)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (52)
- doc/release/RELEASE-NOTES.md (2 hunks)
- src/django/api/constants.py (1 hunks)
- src/django/api/facility_actions/processing_facility_api.py (2 hunks)
- src/django/api/migrations/0162_update_moderationevent_table_fields.py (1 hunks)
- src/django/api/models/moderation_event.py (3 hunks)
- src/django/api/moderation_event_actions/creation/dtos/create_moderation_event_dto.py (1 hunks)
- src/django/api/moderation_event_actions/creation/event_creation_strategy.py (1 hunks)
- src/django/api/moderation_event_actions/creation/location_contribution/location_contribution.py (1 hunks)
- src/django/api/moderation_event_actions/creation/location_contribution/processors/contribution_processor.py (1 hunks)
- src/django/api/moderation_event_actions/creation/location_contribution/processors/geocoding_processor.py (1 hunks)
- src/django/api/moderation_event_actions/creation/location_contribution/processors/production_location_data_processor.py (1 hunks)
- src/django/api/moderation_event_actions/creation/location_contribution/processors/source_processor.py (1 hunks)
- src/django/api/moderation_event_actions/creation/moderation_event_creator.py (1 hunks)
- src/django/api/permissions.py (1 hunks)
- src/django/api/serializers/v1/contribution_moderation_event_source_field_serializer.py (1 hunks)
- src/django/api/serializers/v1/moderation_events_serializer.py (2 hunks)
- src/django/api/serializers/v1/production_locations_serializer.py (2 hunks)
- src/django/api/tests/test_data.py (1 hunks)
- src/django/api/tests/test_facility_submit.py (2 hunks)
- src/django/api/tests/test_location_contribution_strategy.py (1 hunks)
- src/django/api/tests/test_production_locations_create.py (1 hunks)
- src/django/api/tests/test_v1_utils.py (1 hunks)
- src/django/api/views/facility/facilities_view_set.py (2 hunks)
- src/django/api/views/facility/facility_list_view_set.py (2 hunks)
- src/django/api/views/v1/moderation_events.py (3 hunks)
- src/django/api/views/v1/production_locations.py (5 hunks)
- src/django/api/views/v1/url_names.py (1 hunks)
- src/django/api/views/v1/utils.py (3 hunks)
- src/django/contricleaner/constants.py (1 hunks)
- src/django/contricleaner/lib/contri_cleaner.py (2 hunks)
- src/django/contricleaner/lib/helpers/split_values.py (1 hunks)
- src/django/contricleaner/lib/serializers/row_serializers/row_clean_field_serializer.py (1 hunks)
- src/django/contricleaner/lib/serializers/row_serializers/row_country_serializer.py (1 hunks)
- src/django/contricleaner/lib/serializers/row_serializers/row_facility_type_serializer.py (1 hunks)
- src/django/contricleaner/lib/serializers/row_serializers/row_required_fields_serializer.py (2 hunks)
- src/django/contricleaner/lib/serializers/row_serializers/row_sector_serializer.py (2 hunks)
- src/django/contricleaner/lib/validators/pre_validators/pre_header_validator.py (2 hunks)
- src/django/contricleaner/tests/test_composite_row_serializer.py (4 hunks)
- src/django/contricleaner/tests/test_contri_cleaner.py (4 hunks)
- src/django/contricleaner/tests/test_pre_header_validator.py (1 hunks)
- src/django/contricleaner/tests/test_row_clean_field_serializer.py (1 hunks)
- src/django/contricleaner/tests/test_row_country_serializer.py (1 hunks)
- src/django/contricleaner/tests/test_row_facility_type_serializer.py (5 hunks)
- src/django/contricleaner/tests/test_row_required_fields_serializer.py (1 hunks)
- src/django/contricleaner/tests/test_row_sector_serializer.py (3 hunks)
- src/django/contricleaner/tests/test_source_parser_csv.py (4 hunks)
- src/django/contricleaner/tests/test_source_parser_xlsx.py (4 hunks)
- src/django/contricleaner/tests/test_split_values_function.py (1 hunks)
- src/django/countries/lib/get_country_code.py (1 hunks)
- src/django/countries/tests/test_country_code_parser.py (1 hunks)
- src/django/oar/settings.py (2 hunks)
- src/django/oar/urls.py (3 hunks)
✅ Files skipped from review due to trivial changes (5)
- src/django/countries/tests/test_country_code_parser.py
- src/django/countries/lib/get_country_code.py
- src/django/contricleaner/constants.py
- src/django/api/views/v1/url_names.py
- src/django/contricleaner/lib/serializers/row_serializers/row_country_serializer.py
🧰 Additional context used
📓 Learnings (1)
src/django/api/serializers/v1/moderation_events_serializer.py (1)
Learnt from: VadimKovalenkoSNF
PR: opensupplyhub/open-supply-hub#391
File: src/django/api/views/v1/moderation_events.py:41-43
Timestamp: 2024-11-12T11:42:07.768Z
Learning: In the `ModerationEvents` API endpoint, parameter validation is handled in `src/django/api/serializers/v1/moderation_events_serializer.py`.
🪛 Markdownlint (0.35.0)
doc/release/RELEASE-NOTES.md
36-36: Expected: asterisk; Actual: dash
Unordered list style
(MD004, ul-style)
37-37: Expected: asterisk; Actual: dash
Unordered list style
(MD004, ul-style)
40-40: Expected: asterisk; Actual: dash
Unordered list style
(MD004, ul-style)
41-41: Expected: asterisk; Actual: dash
Unordered list style
(MD004, ul-style)
58-58: Expected: 2; Actual: 4
Unordered list indentation
(MD007, ul-indent)
🔇 Additional comments (35)
src/django/contricleaner/tests/test_source_parser_csv.py (1)
53-57: LGTM! Consistent update of data structures.
The conversion from sets to lists in processed_values maintains data integrity while giving tests deterministic ordering.
Let's verify this change is consistent across the codebase:
Also applies to: 112-116
✅ Verification successful
Consistent usage of list literals for processed_values confirmed
The verification shows that all occurrences of processed_values across the test files consistently use list literals [...] instead of set literals {...}. No instances of set literals were found, confirming the standardization is complete.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining set literals in processed_values
# to ensure consistent data structure usage
# Search for set literals in processed_values
rg "processed_values':\s*\{" --type py
# Search for list literals in processed_values to confirm the pattern
rg "processed_values':\s*\[" --type py
Length of output: 1995
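One practical reason to prefer lists over sets in fixtures like `processed_values` (beyond deterministic ordering) is JSON serialization: Python sets are not JSON-serializable. A quick standalone illustration:

```python
import json

# Lists serialize cleanly and preserve order.
as_list = {'processed_values': ['Apparel', 'Equipment']}
encoded = json.dumps(as_list)

# Sets raise TypeError and have no guaranteed iteration order.
as_set = {'processed_values': {'Apparel', 'Equipment'}}
try:
    json.dumps(as_set)
    set_serializable = True
except TypeError:
    set_serializable = False
```

This is why API responses and test fixtures that round-trip through JSON standardize on lists.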
src/django/contricleaner/tests/test_source_parser_xlsx.py (4)
107-107: Consistent with previous changes
The conversion from sets to lists continues the standardization pattern.
Also applies to: 111-111
179-179: Consistent with previous changes
The conversion from sets to lists maintains consistency throughout the test cases.
Also applies to: 183-183
Line range hint 1-248: Well-structured test coverage for API changes
The test file maintains comprehensive coverage while adapting to the API standardization changes. The test cases effectively validate:
- Consistent data structures (lists instead of sets)
- Standardized error formats
- Various input scenarios and edge cases
72-72: Verify consistency of data structure changes across the codebase
The change from sets to lists for processed_values appears to be for better JSON serialization and deterministic ordering. This aligns with API standardization efforts.
Let's verify this change is consistent across the codebase:
Also applies to: 76-76
✅ Verification successful
This will help us see the actual usage patterns and verify if the conversion from sets to lists is consistent across all test files.
Data structure standardization is consistent across the codebase
The verification shows that all processed_values are consistently using list literals [] across all test files and no set literals {} remain. This change is uniformly applied in:
- test_source_parser_xlsx.py
- test_row_facility_type_serializer.py
- test_composite_row_serializer.py
- test_source_parser_csv.py
- test_contri_cleaner.py
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for any remaining set literals in processed_values assignments
rg -l "processed_values['\"]?\s*:\s*\{" --type py
# Search for the new list format to confirm standardization
rg -l "processed_values['\"]?\s*:\s*\[" --type py
Length of output: 408
Script:
#!/bin/bash
# Let's examine the actual content of these files to see the data structure usage
for file in $(rg -l "processed_values" --type py); do
echo "=== $file ==="
rg "processed_values.*[:=].*[{\[]" -A 2 "$file"
done
Length of output: 5093
src/django/contricleaner/lib/validators/pre_validators/pre_header_validator.py (1)
5-5: LGTM! Good standardization of error handling
The addition of NON_FIELD_ERRORS_KEY import aligns well with the PR's objective of standardizing error responses across the API.
src/django/contricleaner/tests/test_contri_cleaner.py (3)
114-114: LGTM! Consistent implementation across file types
The changes maintain parity between CSV and XLSX processing, ensuring consistent behavior regardless of the input format.
Also applies to: 118-118
163-163: LGTM! Enhanced error handling with field validation
The addition of error field validation improves the error response structure and aligns with Django REST framework conventions. This change supports the PR's goal of standardizing error responses across the API.
Also applies to: 174-174, 178-178
53-53: Verify consistency with ContriCleaner implementation
The change from set to list for processed_values suggests a modification in how facility types are handled. Please ensure this aligns with any changes in the ContriCleaner implementation.
Also applies to: 57-57
✅ Verification successful
Based on the implementation review, I can now provide a final response:
Change from set to list is consistent with implementation
The change from set to list in the test's processed_values is correct and matches the implementation. The split_values() function in RowFacilityTypeSerializer processes the raw values and returns them as processed values, which are then used in the facility type data structure. The test is properly reflecting the expected data format.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for related changes in ContriCleaner implementation
ast-grep --pattern 'processed_values = $_'
Length of output: 65663
Script:
#!/bin/bash
# Search for ContriCleaner implementation and related files
rg -l "ContriCleaner" --type py
# Then search for processed_values usage in those files
rg "processed_values" --type py
Length of output: 4331
Script:
#!/bin/bash
# Check the implementation of ContriCleaner and related serializers
cat src/django/contricleaner/lib/contri_cleaner.py
cat src/django/contricleaner/lib/serializers/row_serializers/row_facility_type_serializer.py
Length of output: 6192
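For context, a value splitter like the `split_values()` mentioned above typically normalizes a delimited string (or an existing list) into a clean list of items. The helper below is a hypothetical sketch, not the actual ContriCleaner implementation; the function name, signature, and `'|'` separator are assumptions:

```python
def split_values(value, separator='|'):
    """Hypothetical sketch: normalize a raw cell into a list of trimmed values."""
    if isinstance(value, str):
        value = value.split(separator)
    # Strip whitespace and drop empty entries.
    return [item.strip() for item in value if item and item.strip()]
```

Under these assumptions, `split_values('Apparel|Equipment')` yields a plain list, which then flows into `processed_values` in the serializer output.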
src/django/api/moderation_event_actions/creation/location_contribution/location_contribution.py (1)
23-32: Implementation of serialize method looks good
The serialize method correctly processes the event DTO using the location data processors.
src/django/api/views/v1/production_locations.py (1)
135-179: create method implementation is correct
The create method properly handles service availability checks, input data validation, and moderation event creation using the provided strategy. Error responses are appropriately constructed based on validation outcomes.
src/django/api/tests/test_location_contribution_strategy.py (1)
411-501: Verify the mapping of 'sector' and 'product_type' fields in cleaned data
In test_moderation_event_is_created_with_coordinates_properly, the input data's 'sector' field is ['Apparel', 'Equipment'], but the cleaned data shows 'sector' as ['Unspecified'] and includes the original 'sector' values in 'product_type'. Please verify if this is the intended behavior.
src/django/contricleaner/tests/test_row_country_serializer.py (1)
30-34: LGTM! Error validation improvements.
The changes enhance error validation by:
- Using consistent error message format
- Adding field validation in error response
src/django/api/moderation_event_actions/creation/location_contribution/processors/contribution_processor.py (1)
15-22: LGTM! Well-structured implementation of Chain of Responsibility pattern.
The class is well-documented and follows good design principles with clear separation of concerns.
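The Chain of Responsibility pattern noted above can be sketched in a few lines. This is an illustrative skeleton only, not the project's actual `ContributionProcessor`; all class names here are invented for the example:

```python
from abc import ABC, abstractmethod

class Processor(ABC):
    """Each processor handles the data, then passes it to the next link."""

    def __init__(self, next_processor=None):
        self.next = next_processor

    def process(self, data: dict) -> dict:
        data = self.handle(data)
        return self.next.process(data) if self.next else data

    @abstractmethod
    def handle(self, data: dict) -> dict:
        ...

class CleanProcessor(Processor):
    def handle(self, data: dict) -> dict:
        # Trim string values before the next processor runs.
        return {k: v.strip() if isinstance(v, str) else v
                for k, v in data.items()}

class GeocodeProcessor(Processor):
    def handle(self, data: dict) -> dict:
        data['geocoded'] = True  # stand-in for a real geocoding call
        return data

chain = CleanProcessor(GeocodeProcessor())
result = chain.process({'name': '  Factory A  '})
```

Each link does one job and is unaware of the overall pipeline, which is the separation of concerns the review comment is praising.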
src/django/api/serializers/v1/production_locations_serializer.py (1)
64-64: LGTM! Error message standardization looks good.
The change to use APIV1CommonErrorMessages.COMMON_REQ_QUERY_ERROR aligns with the project's effort to standardize error messages across the API. The error response structure with both detail and errors fields follows REST API best practices.
src/django/api/permissions.py (1)
67-68: LGTM! More descriptive error message.
The updated error message is more user-friendly and clearly explains the requirements for access.
src/django/api/migrations/0162_update_moderationevent_table_fields.py (2)
10-23: Well-documented migration with clear objectives!
The migration's docstring provides excellent documentation of all changes being made to the ModerationEvent table. This level of detail helps future developers understand the purpose and impact of each modification.
29-53: Verify the migration's impact on existing data
The migration operations look correct, but since we're making the contributor field non-nullable and changing the primary key, we should verify:
- All existing records have a contributor value
- All UUIDs are unique
✅ Verification successful
Based on the evidence gathered from the codebase:
- We can see that the current model definition in `src/django/api/models/moderation_event.py` shows the `contributor` field is already part of the model.
- The migration `0162_update_moderationevent_table_fields.py` is making changes to existing fields, including making the `contributor` field non-nullable with `PROTECT` on delete.
- The model has been using UUIDs, and the migration is just formalizing the UUID field as the primary key.
The migration appears safe to proceed
The changes in the migration are structural improvements to an existing table:
- Making UUID the explicit primary key (it was already being used as a unique identifier)
- Strengthening the contributor relationship with `PROTECT` on delete (which is safer than the current setup)
- The contributor field was already present, and making it non-nullable is safe, as the model's usage in tests and views shows it is always set
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify data integrity for migration
# Check for any null contributors in existing data
echo "Checking for null contributors..."
python3 -c "
import django
django.setup()
from api.models import ModerationEvent
null_contributors = ModerationEvent.objects.filter(contributor__isnull=True).count()
print(f'Records with null contributors: {null_contributors}')
# Check for duplicate UUIDs
duplicate_uuids = ModerationEvent.objects.values('uuid').annotate(count=Count('uuid')).filter(count__gt=1).count()
print(f'Records with duplicate UUIDs: {duplicate_uuids}')
"
Length of output: 632
Script:
#!/bin/bash
# Let's examine the model definition and any existing migrations to understand the current state
# First, let's find the model definition
rg -A 10 "class ModerationEvent"
# Then, let's look for any previous migrations that might have set up the contributor field
fd ".*migrations.*py" | xargs rg "contributor.*ModerationEvent"
Length of output: 11940
Script:
#!/bin/bash
# Let's examine the model definition and previous migrations more specifically
# First, get the current model definition
rg -A 20 "^class ModerationEvent\(models\.Model\):" src/django/api/models/moderation_event.py
# Then look for previous migrations that set up the contributor field
fd "migrations" --type f | sort -n | xargs rg "contributor.*ModerationEvent|CreateModel.*ModerationEvent"
Length of output: 787
Script:
#!/bin/bash
# Let's look at the migration history for ModerationEvent table
fd "migrations" --type f | sort -n | xargs rg "class Migration.*ModerationEvent|^ dependencies = "
# And specifically look at the initial migration that created this table
fd "migrations" --type f | sort -n | xargs rg -A 30 "CreateModel.*name='moderationevent'"
Length of output: 14636
src/django/api/models/moderation_event.py (1)
Line range hint 27-34: UUID field configuration looks good
The UUID field is properly configured as the primary key with appropriate constraints and helpful documentation.
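A Django UUID primary key of the kind described is conventionally declared as shown in the comment below (the exact field options are an assumption mirroring the review's description); the runnable part demonstrates the `default=uuid.uuid4` behavior such a field relies on:

```python
import uuid

# Hypothetical Django field, per the description above (kept as a comment
# so this snippet runs without Django installed):
#
#     uuid = models.UUIDField(
#         primary_key=True,
#         default=uuid.uuid4,
#         editable=False,
#     )

# The default= callable produces a fresh random (version 4) UUID per row:
first, second = uuid.uuid4(), uuid.uuid4()
```

Because `uuid.uuid4` is passed as a callable (not called), Django evaluates it once per new row, so every ModerationEvent gets a distinct primary key.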
src/django/api/serializers/v1/moderation_events_serializer.py (1)
89-89: Good standardization of error messages
Using APIV1CommonErrorMessages.COMMON_REQ_QUERY_ERROR aligns with the project's goal of standardizing error responses across v1 endpoints.
src/django/contricleaner/lib/contri_cleaner.py (1)
26-26: LGTM: Import addition aligns with error handling standardization
The addition of NON_FIELD_ERRORS_KEY import supports the standardized error handling approach.
src/django/api/views/v1/utils.py (2)
50-51: LGTM! Good use of constants for error messages
The change to use APIV1CommonErrorMessages.COMMON_REQ_QUERY_ERROR improves consistency in error messaging across the API.
78-81: LGTM! Improved error field naming
Good use of NON_FIELD_ERRORS_KEY instead of hardcoded "general". This aligns better with Django's error handling patterns and improves consistency.
src/django/api/views/v1/moderation_events.py (2)
81-81: LGTM! Better alignment with DRF conventions
Renaming patch to partial_update better aligns with Django REST Framework's ViewSet conventions.
110-111: LGTM! Well-structured endpoint definition
Good use of the @action decorator to expose the production-locations endpoint. The parameter rename from moderation_id to pk maintains consistency with ViewSet conventions.
src/django/oar/urls.py (2)
33-33: LGTM! Router registration changes look good.
The changes to use URLNames constants for router registration improve code maintainability and consistency.
Also applies to: 57-59, 62-64
98-98: Verify the impact of removing specific path definitions.
The simplified api_v1 list now only includes router URLs. While this is cleaner, we should verify that all necessary endpoints are still accessible through the router.
✅ Verification successful
Let me gather more information about the router configuration to ensure all endpoints are properly registered.
Let me check one more thing to verify the complete router configuration and view imports.
The simplified api_v1 list is correctly configured and safe
The verification shows that all necessary endpoints are properly registered through the router:
- The `ProductionLocations` view is registered at `api/v1/production-locations`
- The `ModerationEvents` view is registered at `api/v1/moderation-events`
Both views are properly imported and registered with the v1_router, and the router's URLs are correctly included in the api_v1 list. The simplified path definition maintains all required functionality.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any direct path references to v1 API endpoints that might be affected
# Search for direct path references
rg -l "api/v1/production-locations|api/v1/moderation-events" --type py
# Search for potential view imports that might need router registration
ast-grep --pattern 'class $_(ProductionLocations|ModerationEvents)View'
Length of output: 406
Script:
#!/bin/bash
# Check router configuration and view registrations
rg -A 5 "v1_router = .*Router\(" --type py
# Check for any URL patterns in the tests that might indicate missing endpoints
rg "reverse\(['\"].*v1.*['\"]" --type py
# Look for view registrations in the router
rg "v1_router\.register" --type py
Length of output: 513
Script:
#!/bin/bash
# Get the full context of urls.py to see all router registrations
cat src/django/oar/urls.py
# Check for the view class definitions
ast-grep --pattern 'class ProductionLocations'
ast-grep --pattern 'class ModerationEvents'
Length of output: 6823
src/django/api/constants.py (1)
207-216: LGTM! Well-structured error message organization.
The new error message classes provide clear separation of concerns and more descriptive error messages. The static method for generating data type errors is a nice touch for consistency.
Also applies to: 219-229
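A grouped error-message registry with a static helper, as described above, might look like the following sketch. The class name, constants, and method signature are assumptions for illustration, not the project's actual `constants.py`:

```python
class APIV1ErrorMessages:
    """Hypothetical sketch of a grouped error-message registry."""

    COMMON_REQ_QUERY_ERROR = 'The request query is invalid.'
    COMMON_REQ_BODY_ERROR = 'The request body is invalid.'

    @staticmethod
    def invalid_data_type_error(expected: str, received: str) -> str:
        # Centralizing the template keeps wording consistent across views.
        return f'Invalid data. Expected {expected}, but got {received}.'

message = APIV1ErrorMessages.invalid_data_type_error(
    'a dictionary (object)', 'list'
)
```

The static method is what makes the "data type error" wording identical everywhere it is raised, which is the consistency benefit the review highlights.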
src/django/api/tests/test_facility_submit.py (1)
4-4: LGTM! Error message constant update is correct.
The change to use APIErrorMessages is consistent with the updates in constants.py.
Also applies to: 232-232
src/django/api/tests/test_production_locations_create.py (1)
1-46: LGTM! Well-structured test setup.
The test class initialization and setup is well-organized with:
- Proper imports
- Clear test data initialization
- Good separation of concerns in setUp method
- Helper login method
src/django/api/facility_actions/processing_facility_api.py (1)
6-6: LGTM! Consistent error message usage.
The update to use APIErrorMessages is correctly implemented and maintains consistency with the new error handling approach.
Also applies to: 249-249
src/django/api/tests/test_data.py (1)
395-398: LGTM! Well-structured test data.
The new geocoding_no_results test data correctly represents the Google Geocoding API's response format for zero results.
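For reference, the Google Geocoding API reports a miss with an empty `results` array and a `ZERO_RESULTS` status, so a fixture of that shape plausibly looks like the following (the helper function is illustrative, not project code):

```python
# Minimal shape of a Google Geocoding API "no match" response.
geocoding_no_results = {
    'results': [],
    'status': 'ZERO_RESULTS',
}

def has_geocoding_results(response: dict) -> bool:
    """Illustrative helper showing how callers typically branch on it."""
    return response.get('status') == 'OK' and bool(response.get('results'))
```

Tests can then mock `requests.get` to return this payload and assert the endpoint surfaces a clean "address could not be geocoded" error instead of failing.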
src/django/api/views/facility/facility_list_view_set.py (1)
96-98: LGTM! Error message handling standardization.
The change from ErrorMessages to APIErrorMessages aligns with the codebase-wide standardization of error handling.
Let's verify the consistent usage of APIErrorMessages across the codebase:
✅ Verification successful
Inconsistent error message handling across API endpoints
The verification reveals mixed usage of error message constants:
- `APIErrorMessages` is used in facility-related endpoints
- `APIV1CommonErrorMessages` is used in v1 API endpoints and processors
This suggests the standardization is incomplete, with different parts of the codebase using different error message constants.
Let's verify if these are different constants or aliases:
Let's check the actual constants definition to understand if this is an intentional design or needs standardization:
The error message usage is correctly standardized
The code inspection reveals this is an intentional design with two distinct error message classes in api/constants.py:
- `APIErrorMessages`: used for the main API endpoints, containing generic messages
- `APIV1CommonErrorMessages`: specifically designed for V1 API endpoints with more detailed error handling
The change from ErrorMessages to APIErrorMessages in facility_list_view_set.py is correct as it aligns with the non-v1 API endpoints. Both classes have the same MAINTENANCE_MODE message with slightly different formatting, maintaining consistency within their respective API versions.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for any remaining usage of old ErrorMessages constant
# and confirm consistent usage of APIErrorMessages
# Check for any remaining usage of ErrorMessages
rg "ErrorMessages\." --type py
# Verify the usage of APIErrorMessages
rg "APIErrorMessages\." --type py
Length of output: 2796
Script:
#!/bin/bash
# Check the definition and relationship between these error message classes
ast-grep --pattern 'class $_(ErrorMessages|CommonErrorMessages) {'
rg "from .+ import.+(ErrorMessages|CommonErrorMessages)" --type py
Length of output: 1163
Script:
#!/bin/bash
# Check the definition of error message constants
rg -A 5 "class APIErrorMessages|APIV1CommonErrorMessages" --type py
cat src/django/api/constants.py
Length of output: 20898
src/django/oar/settings.py (1)
19-20: LGTM! Consistent error handling configuration.
Good addition of NON_FIELD_ERRORS_KEY to standardize error handling across the application. The comments clearly explain the purpose of redefining this key.
Also applies to: 176-180
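For context, Django REST Framework supports renaming the default `non_field_errors` key through the `NON_FIELD_ERRORS_KEY` setting, which object-level serializer errors are then reported under. A settings fragment along these lines (the specific value shown is illustrative):

```python
# settings.py fragment (illustrative): DRF serializers report object-level
# validation errors under this key.
REST_FRAMEWORK = {
    'NON_FIELD_ERRORS_KEY': 'non_field_errors',
}

# Application code can build error payloads around the same constant so the
# key never drifts between the serializer layer and hand-built responses:
NON_FIELD_ERRORS_KEY = REST_FRAMEWORK['NON_FIELD_ERRORS_KEY']
error_entry = {
    'field': NON_FIELD_ERRORS_KEY,
    'detail': 'The request body is invalid.',
}
```

Sharing one constant between settings and view code is what keeps the v1 error objects consistent, which is the standardization the comment refers to.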
src/django/api/views/facility/facilities_view_set.py (1)
582-584: LGTM! Consistent error handling update.
The change from ErrorMessages to APIErrorMessages maintains consistency with the codebase-wide standardization of error handling.
src/django/contricleaner/lib/validators/pre_validators/pre_header_validator.py
...pi/moderation_event_actions/creation/location_contribution/processors/geocoding_processor.py
...vent_actions/creation/location_contribution/processors/production_location_data_processor.py
src/django/api/serializers/v1/contribution_moderation_event_source_field_serializer.py
VadimKovalenkoSNF
left a comment
Good work, thank you. I've added a couple of comments and suggestions; also, please don't forget to review the CodeRabbit insights.
src/django/api/moderation_event_actions/creation/location_contribution/location_contribution.py
...pi/moderation_event_actions/creation/location_contribution/processors/geocoding_processor.py
...vent_actions/creation/location_contribution/processors/production_location_data_processor.py
…roduction-locations-to-support-slc-and-api-mod
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
doc/release/RELEASE-NOTES.md (2)
58-59: Fix indentation in release instructions.

The release instructions section has incorrect indentation. The list items should be indented by 2 spaces instead of 4 spaces to maintain consistency with Markdown standards.

```diff
 * Ensure that the following commands are included in the `post_deployment` command:
-    * `migrate`
-    * `reindex_database`
+  * `migrate`
+  * `reindex_database`
```
🪛 Markdownlint (0.35.0)
58-58: Expected: 2; Actual: 4
Unordered list indentation(MD007, ul-indent)
59-59: Expected: 2; Actual: 4
Unordered list indentation(MD007, ul-indent)
34-37: Improve visibility of breaking changes.

The breaking changes to the endpoints should be more prominently highlighted to ensure developers are aware of potential impacts.

Consider restructuring this section to:

```diff
-* [OSDEV-1449](https://opensupplyhub.atlassian.net/browse/OSDEV-1449) - **Breaking changes** to the following endpoints:
-  - GET `v1/moderation-events/`
-  - GET `v1/production-locations/`
+* [OSDEV-1449](https://opensupplyhub.atlassian.net/browse/OSDEV-1449) - ⚠️ BREAKING CHANGES:
+  1. GET `v1/moderation-events/`
+  2. GET `v1/production-locations/`
+
+  > ⚠️ These changes affect:
+  > - `sort_by` parameter functionality
+  > - `search_after` parameter split into `search_after_value` and `search_after_id`
```
🪛 Markdownlint (0.35.0)
36-36: Expected: asterisk; Actual: dash
Unordered list style(MD004, ul-style)
37-37: Expected: asterisk; Actual: dash
Unordered list style(MD004, ul-style)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
doc/release/RELEASE-NOTES.md (2 hunks)
🔇 Additional comments (1)
doc/release/RELEASE-NOTES.md (1)
15-29: Verify database migration impact
The ModerationEvent table changes include significant modifications to primary key and field constraints. This could have potential impact on existing data and system performance.
Run the following script to assess the impact:
✅ Verification successful
Based on the search results, I can now generate a final response regarding the database migration impact:
Migration changes to ModerationEvent table are safe to proceed
The changes to the ModerationEvent table are well-structured and maintain data integrity:
- Making `contributor` non-nullable is safe, as the codebase shows no instances of null contributors in moderation events
- Making `geocode_result` optional is aligned with existing code patterns, where it's already handled as an optional field in multiple places
- The `claim` field being optional matches existing usage patterns, where moderation events can exist without claims
- Removal of redundant constraints follows Django best practices without impacting functionality
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for potential issues with the migration
# Check for any existing null values in contributor field
echo "Checking for null contributors..."
rg "contributor.*=.*null" src/django/api/migrations/
# Check for any existing claims that might be affected
echo "Checking for existing claims..."
rg "claim.*=.*null" src/django/api/migrations/
# Check for any code dependencies on geocode_result
echo "Checking geocode_result dependencies..."
rg "geocode_result" src/django/
Length of output: 67771
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (5)
src/django/api/views/v1/moderation_events.py (2)
35-44: Consider moving OpenSearch initialization to a dedicated service class.

While the static method is well-implemented, consider extracting this initialization logic to a dedicated service class to improve separation of concerns and reusability.

Example refactor:

```python
class OpenSearchServiceFactory:
    @staticmethod
    def create_services() -> Tuple[OpenSearchService, OpenSearchQueryDirector]:
        opensearch_service = OpenSearchService()
        moderation_events_query_builder = ModerationEventsQueryBuilder()
        opensearch_query_director = OpenSearchQueryDirector(
            moderation_events_query_builder
        )
        return (opensearch_service, opensearch_query_director)
```
Line range hint 118-142: Consider more specific exception handling.

While the implementation is solid, the broad `Exception` catch could mask specific errors. Consider catching specific exceptions to provide more detailed error responses.

Example improvement:

```diff
 try:
     item = add_production_location.run_processing()
-except Exception as error:
+except (ValueError, ValidationError) as error:
     return ModerationEventsService.handle_processing_error(
         error
     )
+except DatabaseError as error:
+    return Response(
+        {"detail": "Database operation failed"},
+        status=status.HTTP_500_INTERNAL_SERVER_ERROR
+    )
+except Exception as error:
+    # Log unexpected errors
+    logger.error(f"Unexpected error: {error}")
+    return Response(
+        {"detail": "An unexpected error occurred"},
+        status=status.HTTP_500_INTERNAL_SERVER_ERROR
+    )
```

src/django/api/views/v1/production_locations.py (1)
43-52: Consider adding type hints and docstring.The method implementation looks good, but could benefit from:
- Type hints for the return tuple elements
- A docstring explaining the purpose and return value
```diff
 @staticmethod
-def __init_opensearch() -> Tuple[OpenSearchService,
-                                 OpenSearchQueryDirector]:
+def __init_opensearch() -> Tuple[OpenSearchService, OpenSearchQueryDirector]:
+    """
+    Initialize and return OpenSearch service instances.
+
+    Returns:
+        Tuple containing OpenSearchService and OpenSearchQueryDirector instances
+    """
     opensearch_service = OpenSearchService()
```

src/django/api/moderation_event_actions/creation/location_contribution/processors/production_location_data_processor.py (2)
88-93: Adjust docstrings to follow PEP 257 conventions

The docstrings should follow PEP 257 conventions, which recommend using triple double quotes (`"""`) and having a one-line summary followed by an optional detailed description.

Update the docstrings as follows:
At lines 88-93:
```diff
 @staticmethod
 def __map_fields_to_supported_cc_schema(data_to_map: Dict) -> Dict:
-    '''
-    This method maps input fields to the predefined set of supported
-    fields in the ContriCleaner library. It should be a temporary solution
-    until some form of serializer versioning is implemented for
-    ContriCleaner.
-    '''
+    """
+    Maps input fields to the predefined set of supported fields in the ContriCleaner library.
+
+    This is a temporary solution until serializer versioning is implemented for ContriCleaner.
+    """
```

At lines 139-144:
```diff
 @staticmethod
 def __substitute_facility_type_in_error_message(
         facility_type_error: Dict) -> Dict:
-    '''
-    Replaces occurrences of a facility_type field name in the
-    ContriCleaner error message with a new location_type field name to
-    support the v1 API schema. It should be a temporary solution until
-    some form of serializer versioning is implemented for ContriCleaner.
-    '''
+    """
+    Replaces 'facility_type' with 'location_type' in error messages to support the v1 API schema.
+
+    This is a temporary solution until serializer versioning is implemented for ContriCleaner.
+    """
```

Also applies to: 139-144
151-157: Ensure safe access to `field_mapping_config` to prevent `KeyError`

In the method `__substitute_facility_type_in_error_message`, the code assumes that `field_to_replace` exists in `field_mapping_config`. If it doesn't, a `KeyError` will be raised. Consider adding a check to ensure safe access.

You can modify the method as follows:
```diff
 field_to_replace = facility_type_error['field']
+if field_to_replace not in field_mapping_config:
+    # If the field is not in the mapping, return the original error
+    return {
+        'field': field_to_replace,
+        'detail': facility_type_error['message']
+    }
 return {
     'field': field_mapping_config[field_to_replace],
     'detail': facility_type_error['message'].replace(
         field_to_replace,
         field_mapping_config[field_to_replace]
     )
 }
```
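An equivalent, slightly more compact alternative to the membership check is `dict.get` with the original field name as the fallback. This sketch uses assumed names (`field_mapping_config`, the error-dict shape) inferred from the review context, not the project's actual code:

```python
# Assumed mapping from ContriCleaner field names to v1 API field names.
field_mapping_config = {'facility_type': 'location_type'}


def substitute_field_name(facility_type_error: dict) -> dict:
    field = facility_type_error['field']
    # Fall back to the original name when no mapping exists, so unmapped
    # fields pass through unchanged instead of raising KeyError.
    mapped = field_mapping_config.get(field, field)
    return {
        'field': mapped,
        'detail': facility_type_error['message'].replace(field, mapped),
    }


print(substitute_field_name(
    {'field': 'facility_type', 'message': 'facility_type is required'}
))
# → {'field': 'location_type', 'detail': 'location_type is required'}
```

With the fallback, the early-return branch becomes unnecessary and the two cases share one code path.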
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (14)
- src/dedupe-hub/api/app/main.py (0 hunks)
- src/dedupe-hub/api/app/matching/matcher/exact/exact_matcher.py (0 hunks)
- src/dedupe-hub/api/app/matching/matcher/gazeteer/gazetteer_matcher.py (0 hunks)
- src/dedupe-hub/api/app/matching/process_match.py (0 hunks)
- src/django/api/facility_actions/processing_facility_api.py (2 hunks)
- src/django/api/facility_actions/processing_facility_list.py (0 hunks)
- src/django/api/models/transactions/clean_facilitylistitems.py (0 hunks)
- src/django/api/moderation_event_actions/creation/location_contribution/location_contribution.py (1 hunks)
- src/django/api/moderation_event_actions/creation/location_contribution/processors/contribution_processor.py (1 hunks)
- src/django/api/moderation_event_actions/creation/location_contribution/processors/geocoding_processor.py (1 hunks)
- src/django/api/moderation_event_actions/creation/location_contribution/processors/production_location_data_processor.py (1 hunks)
- src/django/api/views/facility/facilities_view_set.py (2 hunks)
- src/django/api/views/v1/moderation_events.py (5 hunks)
- src/django/api/views/v1/production_locations.py (5 hunks)
💤 Files with no reviewable changes (6)
- src/django/api/models/transactions/clean_facilitylistitems.py
- src/dedupe-hub/api/app/matching/process_match.py
- src/dedupe-hub/api/app/main.py
- src/django/api/facility_actions/processing_facility_list.py
- src/dedupe-hub/api/app/matching/matcher/exact/exact_matcher.py
- src/dedupe-hub/api/app/matching/matcher/gazeteer/gazetteer_matcher.py
🚧 Files skipped from review as they are similar to previous changes (5)
- src/django/api/facility_actions/processing_facility_api.py
- src/django/api/views/facility/facilities_view_set.py
- src/django/api/moderation_event_actions/creation/location_contribution/processors/contribution_processor.py
- src/django/api/moderation_event_actions/creation/location_contribution/processors/geocoding_processor.py
- src/django/api/moderation_event_actions/creation/location_contribution/location_contribution.py
🔇 Additional comments (9)
src/django/api/views/v1/moderation_events.py (4)
Line range hint 1-34: LGTM! Clean imports and proper permission setup.
The imports are well-organized and the ViewSet is properly configured with appropriate permission classes.
58-60: LGTM! Well-structured list implementation.
The method follows a clear flow with proper error handling and response formatting.
Also applies to: 64-67
72-85: LGTM! Clean retrieve implementation with proper validation.
Good use of class methods for validation and proper encapsulation of query construction.
Line range hint 89-117: LGTM! Well-structured partial update with proper validations.
The method follows REST conventions and implements a robust validation flow with standardized error responses.
src/django/api/views/v1/production_locations.py (4)
Line range hint 1-37: LGTM! Well-organized imports.
The imports are properly organized by source and all appear to be necessary for the implemented functionality.
54-74: LGTM! Well-implemented permission handling.
The implementation follows good practices with:
- Clear separation of concerns between global and action-specific permissions
- Proper documentation
- Correct implementation of the suggested changes from the previous review
76-95: LGTM! Good security practices implemented.
The implementation shows good practices:
- Proper throttling for data upload protection
- Restricted content type handling for POST requests
- Clear documentation and appropriate use of parent methods
107-109: LGTM! Consistent use of OpenSearch initialization.
The refactored OpenSearch initialization is properly integrated into both list and retrieve methods.
Also applies to: 123-125
src/django/api/moderation_event_actions/creation/location_contribution/processors/production_location_data_processor.py (1)
114-114: Correct the typo in method name `__trasform_cc_errors_to_api_v1_format`

The method name `__trasform_cc_errors_to_api_v1_format` contains a typo ("trasform" instead of "transform"), present in both the method definition and all calls to it. The code runs today because the misspelling is consistent, but any future call site written with the correct spelling would raise an AttributeError at runtime.
Apply the following diffs to fix the method name and its usage:
Fix the method definition at line 114:
```diff
-def __trasform_cc_errors_to_api_v1_format(cc_errors: List[Dict]) -> Dict:
+def __transform_cc_errors_to_api_v1_format(cc_errors: List[Dict]) -> Dict:
```

Update all calls to this method:
At line 48:
```diff
-self.__trasform_cc_errors_to_api_v1_format(
+self.__transform_cc_errors_to_api_v1_format(
```

At line 60:

```diff
-self.__trasform_cc_errors_to_api_v1_format(
+self.__transform_cc_errors_to_api_v1_format(
```

Also applies to: 48-48, 60-60
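A minimal demonstration of why the definition and every call site must be renamed together: if only the definition is corrected, a stale call site fails at runtime. The class below is illustrative, not project code:

```python
class ErrorTransformer:
    @staticmethod
    def __transform(errors):  # definition renamed to the correct spelling
        return errors

    def run(self, errors):
        # Stale call site still using the misspelled name. Because of
        # Python's private-name mangling, this looks up
        # _ErrorTransformer__trasform, which no longer exists.
        return self.__trasform(errors)


try:
    ErrorTransformer().run([])
except AttributeError as error:
    print(f"AttributeError: {error}")
```

This is exactly the failure mode the review warns about, which is why the fix must land in the definition and both call sites in one change.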
[OSDEV-1158]
- Added the `/api/v1/production-locations/` endpoint based on the API v1 specification. This endpoint allows the creation of a new moderation event for the production location creation with the given details.
- Removed the `as_view` method for all the v1 API endpoints since they are already defined via `DefaultRouter`.
- Used `uuid` as the primary key.
- Made the `geocode_result` field optional. It can be blank if lat and lng have been provided by the user.
- Removed the `blank=False` and `null=False` constraints, as these are the default values for model fields in Django and do not need to be explicitly set.
- Made the `contributor` field non-nullable, as the field should not be left empty. It is required to have information about the contributor.
- Allowed the `claim` field to be blank. This change reflects the fact that a moderation event may not always be related to a claim, so the field can be left empty.
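The `geocode_result` rule described above (blank is acceptable only when the contributor supplied coordinates directly) can be sketched as a plain validation function. The field names `lat` and `lng` are assumptions taken from this description, not the actual model API:

```python
def geocode_result_required(data: dict) -> bool:
    """Return True when a geocode_result must be present.

    Geocoding can be skipped only if the user provided both
    coordinates themselves.
    """
    has_coordinates = (
        data.get('lat') is not None and data.get('lng') is not None
    )
    return not has_coordinates


print(geocode_result_required({'lat': 48.85, 'lng': 2.35}))  # → False
print(geocode_result_required({'name': 'Some Facility'}))    # → True
```

A single supplied coordinate is not enough; both `lat` and `lng` must be present for geocoding to be skipped.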