OO vs FP: What is a good approach to understanding if heavy wrapper classes should be used?


Consider a processing system which ingests objects from an external source and performs extensive processing. One example could be objects detected by a computer vision system which are then fed into a security alert system that looks for particular movement behaviors and abnormal positioning.

The external object provides its position in its preferred reference frame. My code needs to transform that position into one that is consistent with the processing system.

One OO approach would be to build a “heavy” wrapper class which internally transforms the position. It is “heavy” because it contains or calls the transform code rather than blindly passing the external object’s fields through. The benefit is that self-contained caching becomes possible.

The FP approach would be to build a static function which accepts the object and computes the transformed position. If this function is called many times, it might make sense to cache the computed position, and in the FP world that cached position would belong in a separate lookup structure.

Here’s some Python pseudocode:

OO approach:

import visionlib as v

class WrappedObject:
    def __init__(self, external_object):
        # Utils.transform_frame_a_to_b converts a position from the sensor's
        # reference frame to the processing system's frame
        self.position = Utils.transform_frame_a_to_b(external_object.get_position())
    def get_position(self):
        return self.position

def get_objects_in_frame():
    return [WrappedObject(obj) for obj in v.get_sensed_objects()]

def update_tracks(object_list):

    # Lots of loops, specialized processing, optimization, statistical calculations, etc.
    # Calls WrappedObject.get_position() thousands of times per frame
    ...
        ...
        pos = obj.get_position()
        ...
    return object_tracks

# Call
tracks = update_tracks(get_objects_in_frame())

Functional approach:

import visionlib as v

class WrappedObject:
    def __init__(self, external_object):
        self.obj = external_object
    def get_position_in_frame_a(self):
        return self.obj.get_position()

def get_objects_in_frame():
    return [WrappedObject(obj) for obj in v.get_sensed_objects()]

def get_object_positions(object_list):
    return {obj.id: Utils.transform_frame_a_to_b(obj.get_position_in_frame_a())
            for obj in object_list}

def update_tracks(object_list, object_positions):

    # Lots of loops, specialized processing, optimization, statistical calculations, etc.
    # Looks up object_positions thousands of times per frame
    ...
        ...
        pos = object_positions[obj.id]
        ...
    return object_tracks

# Call
object_list = get_objects_in_frame()
object_positions = get_object_positions(object_list)
tracks = update_tracks(object_list, object_positions)

How can this problem be approached? Both solutions work, and the FP one scales better. What are the considerations, and what is generally a good way to decide whether a group of data should be wrapped into an object, or whether it is better to build a processing framework that can be adjusted easily as fields are added and removed?


There is no objective approach to make a decision like this one.

Both of your examples simply reflect different schools of thought, each with some pros and cons, but nothing that cannot typically be mitigated when switching between those worlds.

So the design of such a functionality or library depends mainly on what the designer is used to, what they prefer, and what they know about the target audience who will use the objects. Of course, it can depend to some degree on the surroundings and contextual requirements, which might give the properties you mentioned (like better scalability vs. better encapsulation) a different weight. But at the end of the day, it remains a decision about which you can ask two different experts and get two different opinions.
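To illustrate the mitigation point: the caching concern from the question can be retrofitted in either style using only standard-library tools. A minimal sketch, where `transform_frame_a_to_b` is a hypothetical stand-in for the real frame transform:

```python
from functools import cached_property, lru_cache

def transform_frame_a_to_b(pos):
    # Hypothetical stand-in for the real frame transform
    x, y = pos
    return (x + 1.0, y + 1.0)

# OO style: lazy, self-contained caching — the transform runs at most once
# per instance, on first access of .position
class WrappedObject:
    def __init__(self, external_object):
        self._obj = external_object

    @cached_property
    def position(self):
        return transform_frame_a_to_b(self._obj.get_position())

# FP style: memoize the free function instead; the cache lives outside
# the objects, attached to the function
@lru_cache(maxsize=None)
def position_in_frame_b(pos_a):
    return transform_frame_a_to_b(pos_a)
```

Either way the caching stays an implementation detail, so the choice between the two designs does not hinge on it.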

