Comparing images by hash

I made a program to distinguish similar images using two methods: a perceptual hash built from the average pixel value, and a hash built from the discrete cosine transform (DCT). The first algorithm is considered the less accurate one, and indeed its results did not seem good enough to me. However, the second method gives results that are even less accurate. How can this be, and is it possible to fix it?

This is the first code (average hash):
```python
import cv2
import numpy as np

# image is a BGR image loaded with cv2.imread
res = cv2.resize(image, (32, 32))
gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)

# threshold every pixel against the mean brightness
average = np.mean(gray)
_, bin_img = cv2.threshold(gray, average, 255, cv2.THRESH_BINARY)

# pack the 32*32 binary pixels into one integer hash
hash_value = 0
for i in range(32 * 32):
    y = i // 32
    x = i % 32
    if bin_img[y, x] > 0:
        hash_value |= 1 << i
```
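For either variant, two images are then compared by the Hamming distance between their hashes, i.e. the number of bits that differ; a minimal helper:

```python
def hamming_distance(h1, h2):
    """Number of bits that differ between two integer hashes."""
    return bin(h1 ^ h2).count("1")
```

The smaller the distance, the more similar the two images are under the given hash.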

This is the second (with DCT):

```python
res = cv2.resize(image, (32, 32))
# cv2.imshow("object", res)
gray = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)

average = np.mean(gray)

dct = dctTransform(gray)  # dctTransform is a helper (not shown) that computes the 2D DCT
cut = dct[1:9, 1:9]
_, bin_img = cv2.threshold(dct, average, 255, cv2.THRESH_BINARY)

hash_value = 0
for i in range(8 * 8):
    y = i // 8
    x = i % 8
    if bin_img[y, x] > 0:
        hash_value |= 1 << i
```


Is it possible to somehow fix the second code so that it actually performs better than the first?


