Merged

36 commits
e63ed67
New docs page for pr review process
jdavidpeery Jun 1, 2022
eaf2e90
Added Haley's suggestions
jdavidpeery Jun 7, 2022
94d3c3b
Added step for setting up a PlantCV environment for the branch
jdavidpeery Jun 8, 2022
9db81d1
Added reviewer section to .github/PULL_REQUEST_TEMPLATE.md
jdavidpeery Jun 8, 2022
6f72593
Merge branch 'master' into pr-review-process-doc
HaleySchuhl Jun 8, 2022
ebd9d73
Update docs/pr_review_process.md
jdavidpeery Jun 9, 2022
ff29855
Update docs/pr_review_process.md
jdavidpeery Jun 9, 2022
a41f623
Update docs/pr_review_process.md
jdavidpeery Jun 9, 2022
defdd54
Applied Noah's suggestions from code review
jdavidpeery Jun 9, 2022
e08724b
Added Noah's last suggestion
jdavidpeery Jun 9, 2022
688f575
Merge branch 'pr-review-process-doc' of https://github.com/danforthce…
jdavidpeery Jun 9, 2022
471afe8
add default values
jdavidpeery Jun 9, 2022
03e2c4a
Merge pull request #895 from danforthcenter/pr-review-process-doc
nfahlgren Jun 9, 2022
29b892b
Merge branch 'master' into params-class-default-values-update
nfahlgren Jun 9, 2022
6021db7
Merge pull request #899 from danforthcenter/params-class-default-valu…
nfahlgren Jun 9, 2022
f619447
Remove blank lines after docstring
deepsource-autofix[bot] Jun 10, 2022
108b524
Fix overindented docstrings
nfahlgren Jun 10, 2022
f84f12a
Remove space before docstring titles
nfahlgren Jun 10, 2022
ea115f0
Remove white space before docstring titles
nfahlgren Jun 10, 2022
20a0773
Remove unused variable
nfahlgren Jun 10, 2022
7e94575
Group imports from same package
deepsource-autofix[bot] Jun 10, 2022
147ccf5
Remove blank line after docstring
nfahlgren Jun 10, 2022
98b7c7d
Merge pull request #901 from danforthcenter/deepsource-overindented-d…
nfahlgren Jun 10, 2022
2cd78f4
Merge branch 'master' into deepsource-fix-ec580f6c
nfahlgren Jun 10, 2022
46f1043
Merge branch 'master' into deepsource-fix-17abb638
nfahlgren Jun 10, 2022
f1ba776
Remove unnecessary use of comprehension
deepsource-autofix[bot] Jun 10, 2022
729ff1d
Iterate dictionary directly
deepsource-autofix[bot] Jun 10, 2022
79d1d3c
Use file context manager 'with'
nfahlgren Jun 10, 2022
6890b00
Merge pull request #904 from danforthcenter/deepsource-fix-3a38dea2
nfahlgren Jun 13, 2022
62290d3
Merge branch 'master' into deepsource-fix-19c129b8
nfahlgren Jun 13, 2022
c95c46d
Merge pull request #905 from danforthcenter/deepsource-fix-19c129b8
nfahlgren Jun 13, 2022
936e864
Merge branch 'master' into deepsource-fix-ec580f6c
nfahlgren Jun 13, 2022
6672fee
Merge pull request #900 from danforthcenter/deepsource-fix-ec580f6c
nfahlgren Jun 13, 2022
0ac5beb
Merge branch 'master' into deepsource-fix-17abb638
nfahlgren Jun 13, 2022
811e655
Merge pull request #903 from danforthcenter/deepsource-fix-17abb638
nfahlgren Jun 13, 2022
b00fcaf
Merge branch 'master' into update-4x
nfahlgren Jun 13, 2022
11 changes: 11 additions & 0 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -15,3 +15,14 @@ Reference associated issue numbers. Does this pull request close any issues?

**Additional context**
Add any other context about the problem here.

**For the reviewer**
See [this page](https://plantcv.readthedocs.io/en/stable/pr_review_process/) for instructions on how to review the pull request.
- [ ] PR functionality reviewed in a Jupyter Notebook
- [ ] All tests pass
- [ ] Test coverage remains 100%
- [ ] Documentation tested
- [ ] New documentation pages added to `plantcv/mkdocs.yml`
- [ ] Changes to function input/output signatures added to `updating.md`
- [ ] Code reviewed
- [ ] PR approved
6 changes: 3 additions & 3 deletions docs/params.md
@@ -25,15 +25,15 @@ Attributes are accessed as plantcv.params.*attribute*.
[plantcv.morphology.find_tips](find_tips.md), [plantcv.morphology.segment_skeleton](segment_skeleton.md), [plantcv.morphology.segment_tangent_angle](segment_tangent_angle.md),
[plantcv.morphology.segment_id](segment_id.md), and every region of interest function. Default = 5.

**dpi**: Dots per inch for plotting debugging images.
**dpi**: Dots per inch for plotting debugging images. Default = 100.

**text_size**: Size of the text for labels in debugging plots created by [segment_angle](segment_angle.md), [segment_curvature](segment_curvature.md), [segment_euclidean_length](segment_euclidean_length.md),
[segment_id](segment_id.md), [segment_insertion_angle](segment_insertion_angle.md), [segment_path_length](segment_pathlength.md), and [segment_tangent_angle](segment_tangent_angle.md) from
the morphology sub-package.
the morphology sub-package. Default = 0.55.

**text_thickness**: Thickness of the text for labels in debugging plots created by [segment_angle](segment_angle.md), [segment_curvature](segment_curvature.md), [segment_euclidean_length](segment_euclidean_length.md),
[segment_id](segment_id.md), [segment_insertion_angle](segment_insertion_angle.md), [segment_path_length](segment_pathlength.md), and [segment_tangent_angle](segment_tangent_angle.md) from
the morphology sub-package.
the morphology sub-package. Default = 2.

**marker_size**: Size of markers in debugging plots created by [plantcv.transform.warp](transform_warp.md). Default = 60.

83 changes: 83 additions & 0 deletions docs/pr_review_process.md
@@ -0,0 +1,83 @@
# Pull Request Review Process

### Pull Request Author Guidelines

1. PRs (Pull Requests) should be modular and as targeted as possible.
2. PRs for new features should focus on a minimum viable product. Embellishments can be added in additional PRs.
3. PRs should not contain unrelated changes as much as possible. Open a new PR for unrelated changes.
4. Commits for PRs should be succinct but descriptive. Try to avoid trivial “update file.py” type commit messages.
5. When a PR is ready for review, apply the “ready to review” label. At this point, additional changes should only be made if necessary or as part of the review. Let the team know it’s ready so someone can volunteer to review it.

### Overview

!!! note
The steps below do not necessarily need to be followed precisely in this order. Not all PRs will require all steps and some will not require an in-depth review at all (e.g., minor documentation updates).

### Step 0: Set up a local PlantCV environment for the branch being reviewed

1. Read through the steps of the [contributing guide](CONTRIBUTING.md).
2. Email [email protected] to get added as a contributor.
3. [Install PlantCV from the source code](installation.md#installation-from-the-source-code).
4. Activate the plantcv environment and change the current directory to the location where plantcv is installed. Then run `git checkout <pull-request-branch>` to switch to the branch to be reviewed.
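The setup steps above can be sketched as a short shell session. The environment name `plantcv`, the repository path, and the branch name are placeholders; substitute the values from your own installation:

```shell
# Activate the conda environment created during installation from source
# (the name "plantcv" is an assumption; use whatever name you chose)
conda activate plantcv

# Move to the local clone of the plantcv repository
cd /path/to/plantcv

# Fetch the latest branches, then switch to the PR branch under review
git fetch origin
git checkout <pull-request-branch>
```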

### Step 1: Familiarize yourself with the PR

Before starting the review, it is helpful to understand the type of update and the intended functionality.

1. Inspect the files changed.
2. Read the updated documentation. To compile the documentation locally, run `mkdocs serve --theme readthedocs` in a local terminal with an activated PlantCV environment (testing the documentation is covered in Step 5).

### Step 2: Prepare materials for the review

1. Maintain a shared team directory for reviews (data and notebooks) in the PlantCV Google Drive. Email [email protected] from a Gmail account to have the team share access to this Google Drive.
2. Find appropriate sample data (either already in the shared directory or new data), ideally not data used by the PR author.
3. Update your local development environment to use the PR branch.
4. Create a new Jupyter notebook named `pr<num>_<short description>.ipynb`.

### Step 3: Review the PR functionality

Read the PR documentation and use your judgment to run relevant parts of PlantCV to test the PR code. For example, test upstream and downstream functionality to ensure the new function works well with other steps in a workflow, and try more than one data type if a function works on both RGB and grayscale images. Potential findings to report to the PR author and/or to propose fixes for:

1. **Errors**: the code does not work as written or does not function as intended (on new data for example).
2. **Process**: in using the code, you think the process can be simplified or made clearer (e.g., fewer inputs, renamed arguments, etc.).
3. **Documentation**: both the documentation and docstrings should match the code in terms of syntax and process.

Iterate as needed.

### Step 4: Review the tests

1. Do all tests pass?
    1. Run relevant tests locally: `py.test --cov=plantcv -k test_name_or_keywords` in a local terminal with an activated PlantCV environment.
2. Does coverage remain at 100%?
3. Do any new or modified tests accurately test the PR code?

Propose updates as needed.

### Step 5: Test the documentation

Reviewing the documentation code is important, but compiling the documentation locally ensures that the resulting pages are rendered correctly. The easiest way to view the documentation live on your local machine is to run `mkdocs serve --theme readthedocs` in a local terminal with an activated PlantCV environment.

Some common oversights that are hard to see in the code but easy to detect in the compiled documentation:

1. New pages are not added to the table of contents `plantcv/mkdocs.yml` (mkdocs will display a warning when you run the command above).
2. Typos in page or image filenames (mkdocs will often display a warning, or the image may appear as a broken link).
3. Typos in formatting markup lead to incorrectly displayed styling (e.g., not closing bold or italics, not having a blank line before a bulleted or numbered list, etc.).
4. Changes to function input/output signatures and new functions are not recorded in `updating.md`.
5. The link to the source code at the end of a documentation page is missing or incorrect.

### Step 6: Review the code

Once the PR is working as intended, review the code itself for potential improvements. Some common types of improvements to suggest:

1. **Uniform style (linting)**: Most IDEs, including PyCharm and VS Code, will indicate when the code does not match the community style guide laid out in PEP 8. The command-line tool `flake8` can also be used if an IDE that supports these checks is not available. A report from deepsource.io is also generated automatically for each PR.
2. **Code simplification**: Any reorganization that makes the code more succinct, readable, or maintainable. In particular, complex nested logical blocks and for loops increase code complexity (and testing demand). The deepsource.io report may also include automated suggestions for code simplification.
3. **Algorithmic improvement**: Suggest an alternative approach if it would be significantly faster or more user-friendly.
4. **Scope**: Check any new code and test data in the `plantcv/tests` module (no need to fix out-of-scope linting issues).
5. **Documentation**: Documentation style is more flexible, but try to avoid very long lines; Markdown automatically joins consecutive lines that are not separated by a blank line. Crop images down where possible. In general, resizing images to a width of 400px keeps documentation pages mobile friendly.
6. **Unnecessary new test data**: If new test data was added, check whether existing test data could be reused instead to save space.
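As an illustration of the kinds of simplifications flagged in items 1–2, here is a small, self-contained sketch; the patterns mirror the automated deepsource fixes merged in this PR, but the values are made up for demonstration:

```python
# Unnecessary comprehension: wrapping range() in a list comprehension
# adds noise; list() expresses the same intent directly.
verbose = [i for i in range(0, 256)]
concise = list(range(0, 256))
assert verbose == concise

# Iterating a dictionary: looping over .keys() is redundant because
# iterating a dict already yields its keys.
channels = {"hue": 1, "saturation": 2, "value": 3}
redundant = [k for k in channels.keys()]
direct = [k for k in channels]
assert redundant == direct
```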

Communicate via GitHub with PR author(s) and directly contribute changes to the PR branch as appropriate until it is ready to merge.

### Step 7: Approve the PR

Once you and the PR author have refined the PR and it is ready to merge, the PR can be approved. It's important that the PR not be approved until it is actually ready to merge. In the PR approval, please include the name (not a link) of the Jupyter notebook used for the review, located in the shared directory.
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -19,6 +19,7 @@ nav:
- 'Contributing to PlantCV': CONTRIBUTING.md
- 'Code of Conduct': CODE_OF_CONDUCT.md
- 'Adding/Editing Documentation': documentation.md
- 'Pull Request Review Process' : pr_review_process.md
- "Tutorials": tutorials.md
- 'Workflow Parallelization': pipeline_parallel.md
- 'Exporting Data': db-exporter.md
88 changes: 43 additions & 45 deletions plantcv/learn/naive_bayes.py
@@ -49,7 +49,7 @@ def naive_bayes(imgdir, maskdir, outfile, mkplots=False):
channels = {"hue": hue, "saturation": saturation, "value": value}

# Split channels into plant and non-plant signal
for channel in channels.keys():
for channel in channels:
fg, bg = _split_plant_background_signal(channels[channel], mask)

# Randomly sample from the plant class (sample 10% of the pixels)
@@ -61,22 +61,20 @@

# Calculate a probability density function for each channel using a Gaussian kernel density estimator
# Create an output file for the PDFs
out = open(outfile, "w")
out.write("class\tchannel\t" + "\t".join(map(str, range(0, 256))) + "\n")
for channel in plant.keys():
print("Calculating PDF for the " + channel + " channel...")
plant_kde = stats.gaussian_kde(plant[channel])
bg_kde = stats.gaussian_kde(background[channel])
# Calculate p from the PDFs for each 8-bit intensity value and save to outfile
plant_pdf = plant_kde(range(0, 256))
out.write("plant\t" + channel + "\t" + "\t".join(map(str, plant_pdf)) + "\n")
bg_pdf = bg_kde(range(0, 256))
out.write("background\t" + channel + "\t" + "\t".join(map(str, bg_pdf)) + "\n")
if mkplots:
# If mkplots is True, make the PDF charts
_plot_pdf(channel, os.path.dirname(outfile), plant=plant_pdf, background=bg_pdf)

out.close()
with open(outfile, "w") as out:
out.write("class\tchannel\t" + "\t".join(map(str, range(0, 256))) + "\n")
for channel in plant:
print("Calculating PDF for the " + channel + " channel...")
plant_kde = stats.gaussian_kde(plant[channel])
bg_kde = stats.gaussian_kde(background[channel])
# Calculate p from the PDFs for each 8-bit intensity value and save to outfile
plant_pdf = plant_kde(range(0, 256))
out.write("plant\t" + channel + "\t" + "\t".join(map(str, plant_pdf)) + "\n")
bg_pdf = bg_kde(range(0, 256))
out.write("background\t" + channel + "\t" + "\t".join(map(str, bg_pdf)) + "\n")
if mkplots:
# If mkplots is True, make the PDF charts
_plot_pdf(channel, os.path.dirname(outfile), plant=plant_pdf, background=bg_pdf)
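The PDF calculation above relies on `scipy.stats.gaussian_kde`. As a rough, numpy-only sketch of what a Gaussian kernel density estimate does when evaluated over the 0–255 intensity range (the fixed bandwidth here is an arbitrary assumption, not the bandwidth scipy selects):

```python
import numpy as np

def gaussian_kde_pdf(samples, bandwidth=5.0):
    """Evaluate a simple Gaussian KDE at every 8-bit intensity (0-255)."""
    samples = np.asarray(samples, dtype=float)
    grid = np.arange(256, dtype=float)
    # One Gaussian bump per sample, summed at each grid point
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    pdf = np.exp(-0.5 * diffs ** 2).sum(axis=1)
    pdf /= pdf.sum()  # normalize over the 256 discrete bins
    return pdf

pdf = gaussian_kde_pdf([100, 102, 98, 150])
assert pdf.shape == (256,)
# The density peaks near the cluster of samples around intensity 100
assert 95 <= int(pdf.argmax()) <= 105
```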


def naive_bayes_multiclass(samples_file, outfile, mkplots=False):
@@ -98,33 +96,33 @@ def naive_bayes_multiclass(samples_file, outfile, mkplots=False):
# Initialize a dictionary to store sampled RGB pixel values for each input class
sample_points = {}
# Open the sampled points text file
f = open(samples_file, "r")
# Read the first line and use the column headers as class labels
header = f.readline()
header = header.rstrip("\n")
class_list = header.split("\t")
# Initialize a dictionary for the red, green, and blue channels for each class
for cls in class_list:
sample_points[cls] = {"red": [], "green": [], "blue": []}
# Loop over the rest of the data in the input file
for row in f:
# Remove newlines and quotes
row = row.rstrip("\n")
row = row.replace('"', '')
# If this is not a blank line, parse the data
if len(row) > 0:
# Split the row into a list of points per class
points = row.split("\t")
# For each point per class
for i, point in enumerate(points):
if len(point) > 0:
# Split the point into red, green, and blue integer values
red, green, blue = map(int, point.split(","))
# Append each intensity value into the appropriate class list
sample_points[class_list[i]]["red"].append(red)
sample_points[class_list[i]]["green"].append(green)
sample_points[class_list[i]]["blue"].append(blue)
f.close()
with open(samples_file, "r") as f:
# Read the first line and use the column headers as class labels
header = f.readline()
header = header.rstrip("\n")
class_list = header.split("\t")
# Initialize a dictionary for the red, green, and blue channels for each class
for cls in class_list:
sample_points[cls] = {"red": [], "green": [], "blue": []}
# Loop over the rest of the data in the input file
for row in f:
# Remove newlines and quotes
row = row.rstrip("\n")
row = row.replace('"', '')
# If this is not a blank line, parse the data
if len(row) > 0:
# Split the row into a list of points per class
points = row.split("\t")
# For each point per class
for i, point in enumerate(points):
if len(point) > 0:
# Split the point into red, green, and blue integer values
red, green, blue = map(int, point.split(","))
# Append each intensity value into the appropriate class list
sample_points[class_list[i]]["red"].append(red)
sample_points[class_list[i]]["green"].append(green)
sample_points[class_list[i]]["blue"].append(blue)

# Initialize a dictionary to store probability density functions per color channel in HSV colorspace
pdfs = {"hue": {}, "saturation": {}, "value": {}}
# For each class
@@ -140,7 +138,7 @@
# Create an HSV channel dictionary that stores the channels as lists (horizontally stacked ndarrays)
channels = {"hue": np.hstack(hue), "saturation": np.hstack(saturation), "value": np.hstack(value)}
# For each channel
for channel in channels.keys():
for channel in channels:
# Create a kernel density estimator for the channel values (Gaussian kernel)
kde = stats.gaussian_kde(channels[channel])
# Use the KDE to calculate a probability density function for the channel
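The sample-file parsing in `naive_bayes_multiclass` reads a tab-separated table whose header row names the classes and whose cells hold quoted `R,G,B` triples. A condensed, self-contained sketch of that format, with the file layout inferred from the code above and a toy two-class table standing in for real sampled points:

```python
import io

# Two classes, two rows; each cell is a quoted comma-separated RGB triple
tsv = 'plant\tbackground\n"10,200,30"\t"90,90,90"\n"12,190,28"\t\n'

sample_points = {}
f = io.StringIO(tsv)
# Header row: class labels
class_list = f.readline().rstrip("\n").split("\t")
for cls in class_list:
    sample_points[cls] = {"red": [], "green": [], "blue": []}
# Data rows: one cell per class, cells may be empty
for row in f:
    row = row.rstrip("\n").replace('"', '')
    if row:
        for i, point in enumerate(row.split("\t")):
            if point:
                red, green, blue = map(int, point.split(","))
                sample_points[class_list[i]]["red"].append(red)
                sample_points[class_list[i]]["green"].append(green)
                sample_points[class_list[i]]["blue"].append(blue)

assert sample_points["plant"]["red"] == [10, 12]
assert sample_points["background"]["green"] == [90]
```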
4 changes: 2 additions & 2 deletions plantcv/plantcv/analyze_color.py
@@ -172,13 +172,13 @@ def analyze_color(rgb_img, mask, hist_plot_type=None, colorspaces="all", label="

# Store into global measurements
# RGB signal values are in an unsigned 8-bit scale of 0-255
rgb_values = [i for i in range(0, 256)]
rgb_values = list(range(0, 256))
# Hue values are in a 0-359 degree scale, every 2 degrees at the midpoint of the interval
hue_values = [i * 2 + 1 for i in range(0, 180)]
# Percentage values on a 0-100 scale (lightness, saturation, and value)
percent_values = [round((i / 255) * 100, 2) for i in range(0, 256)]
# Diverging values on a -128 to 127 scale (green-magenta and blue-yellow)
diverging_values = [i for i in range(-128, 128)]
diverging_values = list(range(-128, 128))

if colorspaces.upper() in ('RGB', 'ALL'):
outputs.add_observation(sample=label, variable='blue_frequencies', trait='blue frequencies',
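The value-scale lists in `analyze_color` above can be checked with a quick sketch; the expressions are reproduced from the code and only the assertions are new:

```python
# Hue values: 0-359 degree scale sampled every 2 degrees at interval midpoints
hue_values = [i * 2 + 1 for i in range(0, 180)]
assert hue_values[0] == 1 and hue_values[-1] == 359 and len(hue_values) == 180

# Percent scale: 8-bit intensities mapped to 0-100, rounded to 2 decimals
percent_values = [round((i / 255) * 100, 2) for i in range(0, 256)]
assert percent_values[0] == 0.0 and percent_values[-1] == 100.0

# Diverging scale for the green-magenta and blue-yellow channels
diverging_values = list(range(-128, 128))
assert len(diverging_values) == 256 and diverging_values[0] == -128
```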
8 changes: 3 additions & 5 deletions plantcv/plantcv/analyze_thermal_values.py
@@ -2,12 +2,11 @@

import os
import numpy as np
from plantcv.plantcv import params
from plantcv.plantcv import deprecation_warning, params
from plantcv.plantcv import outputs
from plotnine import labs
from plantcv.plantcv.visualize import histogram
from plantcv.plantcv import deprecation_warning
from plantcv.plantcv._debug import _debug
from plantcv.plantcv.visualize import histogram
from plotnine import labs


def analyze_thermal_values(thermal_array, mask, histplot=None, label="default"):
@@ -30,7 +29,6 @@ def analyze_thermal_values(thermal_array, mask, histplot=None, label="default"):
:param label: str
:return analysis_image: ggplot
"""

if histplot is not None:
deprecation_warning("'histplot' will be deprecated in a future version of PlantCV. "
"This function creates a histogram by default.")
37 changes: 18 additions & 19 deletions plantcv/plantcv/crop.py
@@ -8,26 +8,25 @@


def crop(img, x, y, h, w):
"""Crop image.

Inputs:
img = RGB, grayscale, or hyperspectral image data
x = X coordinate of starting point
y = Y coordinate of starting point
h = Height
w = Width

Returns:
cropped = cropped image

:param img: numpy.ndarray
:param x: int
:param y: int
:param h: int
:param w: int
:return cropped: numpy.ndarray
"""
Crop image.

Inputs:
img = RGB, grayscale, or hyperspectral image data
x = X coordinate of starting point
y = Y coordinate of starting point
h = Height
w = Width

Returns:
cropped = cropped image

:param img: numpy.ndarray
:param x: int
:param y: int
:param h: int
:param w: int
:return cropped: numpy.ndarray
"""
# Check if the array data format
if len(np.shape(img)) > 2 and np.shape(img)[-1] > 3:
ref_img = img[:, :, [0]]
24 changes: 12 additions & 12 deletions plantcv/plantcv/hyperspectral/_avg_reflectance.py
@@ -4,21 +4,21 @@


def _avg_reflectance(spectral_data, mask):
""" Find average reflectance of masked hyperspectral data instance. This is useful for calculating a target
signature (n_band x 1 - column array) which is required in various GatorSense hyperspectral tools
(https://github.com/GatorSense/hsi_toolkit_py)
"""Find average reflectance of masked hyperspectral data instance.
This is useful for calculating a target signature (n_band x 1 - column array) which is required in various GatorSense
hyperspectral tools (https://github.com/GatorSense/hsi_toolkit_py)

Inputs:
spectral_array = Hyperspectral data instance
mask = Target wavelength value
Inputs:
spectral_array = Hyperspectral data instance
mask = Target wavelength value

Returns:
idx = Index
Returns:
idx = Index

:param spectral_data: __main__.Spectral_data
:param mask: numpy.ndarray
:return spectral_array: __main__.Spectral_data
"""
:param spectral_data: __main__.Spectral_data
:param mask: numpy.ndarray
:return spectral_array: __main__.Spectral_data
"""
# Initialize list of average reflectance values
avg_r = []
