Python - Converting CSV to Objects - Code Design


I have a small script we're using to read in a CSV file containing employees, and perform some basic manipulations on that data.

We read in the data (import_gd_dump), and create an Employees object containing a list of Employee objects (maybe I should think of a better naming convention...lol). We then call clean_all_phone_numbers() on the Employees object, which calls clean_phone_number() on each Employee, and we also call lookup_all_supervisors() on the Employees object.

import csv
import re
import sys

# Previously, this was inside a (virtual) class called "CSVLoader":
#
# class CSVLoader:
#     """Virtual class to assist with loading in CSV files."""
#     def import_gd_dump(self, input_file='Gp Directory 20100331 original.csv'):
#         gd_extract = csv.DictReader(open(input_file), dialect='excel')
#         employees = []
#         for row in gd_extract:
#             curr_employee = Employee(row)
#             employees.append(curr_employee)
#         return employees
#     #self.employees = {row['dbdirid']:row for row in gd_extract}
#
# However, according to http://tomayko.com/writings/the-static-method-thing, the idiomatic way
# of doing this in Python is not with a class function but with a module-level function.

def import_gd_dump(input_file='Gp Directory 20100331 original.csv'):
    """Return a list ('employees') of dict objects, taken from a Group Directory CSV file."""
    gd_extract = csv.DictReader(open(input_file), dialect='excel')
    employees = []
    for row in gd_extract:
        employees.append(row)
    return employees

def write_gd_formatted(employees_dict, output_file="gd_formatted.csv"):
    """Read in an Employees() object, and write out each Employee() inside it to a CSV file."""
    gd_output_fieldnames = ('hrid', 'mail', 'givenName', 'sn', 'dbcostcenter', 'dbdirid',
                            'hrreportsto', 'PHFull', 'PHFull_message', 'SupervisorEmail',
                            'SupervisorFirstName', 'SupervisorSurname')
    try:
        gd_formatted = csv.DictWriter(open(output_file, 'w', newline=''),
                                      fieldnames=gd_output_fieldnames,
                                      extrasaction='ignore', dialect='excel')
    except IOError:
        print('Unable to open file, IO error (Is it locked?)')
        sys.exit(1)
    headers = {n: n for n in gd_output_fieldnames}
    gd_formatted.writerow(headers)
    for employee in employees_dict.employee_list:
        # We're using the employee object's inbuilt __dict__ attribute - hmm, is this good practice?
        gd_formatted.writerow(employee.__dict__)

class Employee:
    """An Employee in the system, with employee attributes (name, email, cost-centre etc.)."""
    def __init__(self, employee_attributes):
        """We use the Employee constructor to convert a dictionary into instance attributes."""
        for k, v in employee_attributes.items():
            setattr(self, k, v)

    def clean_phone_number(self):
        """Perform some rudimentary checks and corrections, to make sure numbers are in the right format.
        Numbers should be in the form 0XYYYYYYYY, where X is the area code, and Y is the local number."""
        if self.telephoneNumber is None or self.telephoneNumber == '':
            return '', 'Missing phone number.'
        else:
            standard_format = re.compile(r'^\+(?P<intl_prefix>\d{2})\((?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
            extra_zero = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
            missing_hyphen = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})(?P<local_second_half>\d{4})')
            if standard_format.search(self.telephoneNumber):
                result = standard_format.search(self.telephoneNumber)
                return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), ''
            elif extra_zero.search(self.telephoneNumber):
                result = extra_zero.search(self.telephoneNumber)
                return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Extra zero in area code - ask user to remediate.'
            elif missing_hyphen.search(self.telephoneNumber):
                result = missing_hyphen.search(self.telephoneNumber)
                return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Missing hyphen in local component - ask user to remediate.'
            else:
                return '', "Number didn't match recognised format. Original text is: " + self.telephoneNumber

class Employees:
    def __init__(self, import_list):
        self.employee_list = []
        for employee in import_list:
            self.employee_list.append(Employee(employee))

    def clean_all_phone_numbers(self):
        for employee in self.employee_list:
            # Should we just set this directly in Employee.clean_phone_number() instead?
            employee.PHFull, employee.PHFull_message = employee.clean_phone_number()

    # Hmm, the search is O(n^2) - there's probably a better way of doing this search?
    def lookup_all_supervisors(self):
        for employee in self.employee_list:
            if employee.hrreportsto is not None and employee.hrreportsto != '':
                for supervisor in self.employee_list:
                    if supervisor.hrid == employee.hrreportsto:
                        (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = supervisor.mail, supervisor.givenName, supervisor.sn
                        break
                else:
                    (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = ('Supervisor not found.', 'Supervisor not found.', 'Supervisor not found.')
            else:
                (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = ('Supervisor not set.', 'Supervisor not set.', 'Supervisor not set.')

    # Is there a more pythonic way of doing this?
    def print_employees(self):
        for employee in self.employee_list:
            print(employee.__dict__)

if __name__ == '__main__':
    db_employees = Employees(import_gd_dump())
    db_employees.clean_all_phone_numbers()
    db_employees.lookup_all_supervisors()
    #db_employees.print_employees()
    write_gd_formatted(db_employees)

Firstly, my preamble question is, can you see anything inherently wrong with the above, from either a class design or Python point-of-view? Is the logic/design sound?

Anyhow, to the specifics:

  1. The Employees object has a method, clean_all_phone_numbers(), which calls clean_phone_number() on each Employee object inside it. Is this bad design? If so, why? Also, is the way I'm calling lookup_all_supervisors() bad?
  2. Originally, I wrapped the clean_phone_number() and lookup_supervisor() methods in a single function, with a single for-loop inside it. clean_phone_number() is O(n), I believe, and lookup_supervisor() is O(n^2) - is it OK splitting them into two loops like this?
  3. In clean_all_phone_numbers(), I'm looping over the Employee objects and setting their values via return/assignment - should I be setting this inside clean_phone_number() itself?

There are also a few things that I sort of hacked out, and I'm not sure if they're bad practice - e.g. print_employees() and write_gd_formatted() both use __dict__, and the constructor for Employee uses setattr() to convert a dictionary into instance attributes.
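To make that concrete, the pattern I'm describing boils down to something like this stripped-down sketch (the field values are made up, purely for illustration):

row = {'givenName': 'Victor', 'sn': 'Hooi'}   # one row as produced by csv.DictReader

class Employee:
    def __init__(self, employee_attributes):
        # Each dictionary key becomes an instance attribute.
        for k, v in employee_attributes.items():
            setattr(self, k, v)

e = Employee(row)
print(e.givenName)   # 'Victor'
print(e.__dict__)    # back to a dict, which is what csv.DictWriter wants
print(vars(e))       # vars() is equivalent to accessing __dict__ directly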

I'd value any thoughts at all. If you think the questions are too broad, let me know and I can repost them as several separate questions (I just didn't want to pollute the boards with multiple similar questions, and the three are fairly tightly related).

Cheers, Victor

Answer

Looks fine to me. Good job. How often are you going to run this script? Most of your questions are moot if this is a one-off thing.

  1. I like the way Employees.clean_all_phone_numbers() delegates to Employee.clean_phone_number().
  2. You really should be using an index (dictionary) here. You can index each employee by hrid when you create them in O(n) and then look them up in O(1) - see the sketch after this list.
    • But only do this if you ever have to run the script again...
    • Just get into the habit of using dictionaries. They are painless and make code easier to read. Whenever you write a method lookup_* you probably just want to index a dictionary.
  3. Not sure. I like explicitly setting state, but this is actually bad design - clean_phone_number() should do that; Employees should be responsible for their own state.
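For point 2, here is a minimal sketch of what that index could look like. It reuses the Employee class and the attribute names (hrid, hrreportsto, mail, givenName, sn) from the code in the question; by_hrid is just an illustrative name, not anything from the original script:

class Employees:
    def __init__(self, import_list):
        self.employee_list = [Employee(row) for row in import_list]
        # Build the index once, in O(n): hrid -> Employee
        self.by_hrid = {e.hrid: e for e in self.employee_list}

    def lookup_all_supervisors(self):
        # Each lookup is now a single O(1) dictionary access instead of an inner loop.
        for employee in self.employee_list:
            if employee.hrreportsto:
                supervisor = self.by_hrid.get(employee.hrreportsto)
                if supervisor is not None:
                    employee.SupervisorEmail = supervisor.mail
                    employee.SupervisorFirstName = supervisor.givenName
                    employee.SupervisorSurname = supervisor.sn
                else:
                    employee.SupervisorEmail = employee.SupervisorFirstName = employee.SupervisorSurname = 'Supervisor not found.'
            else:
                employee.SupervisorEmail = employee.SupervisorFirstName = employee.SupervisorSurname = 'Supervisor not set.'

For point 3, the corresponding change would be for clean_phone_number() to assign self.PHFull and self.PHFull_message itself rather than returning them, so clean_all_phone_numbers() just calls employee.clean_phone_number() in a loop.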