Assistance Needed with eBPF Conformance Test: Understanding Offset Calculations for ldxh and ldxw Operations


I’m currently working on an eBPF specification and have run into some issues due to the lack of documentation. I’m using the conformance tests available in the https://github.com/Alan-Jowett/bpf_conformance/tree/main/tests repository, and I’m having particular difficulty with the subnet test: https://github.com/Alan-Jowett/bpf_conformance/tree/main/tests/subnet.data

My main question is about the offset calculation for the ldxh and ldxw operations. How are these offsets computed, and how do the loads interact with the memory block passed to the program?

In the test, the values loaded from memory by the operations ldxh %r3, [%r1+12], ldxh %r3, [%r1+16], and ldxw %r1, [%r1+16] appear to be 0x0008, 0x3c00, and 0x0201a8c0 respectively. Given the expected test result, though, the value loaded by the last operation would have to be either 0x0201a8c0 or 0x0101a8c0, since the final jeq only checks the low 24 bits (0x01a8c0).
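For context, my working assumption (which may be wrong) is that %r1 holds a pointer to the start of the memory block, and that ldxh/ldxw read 2 and 4 bytes at %r1 plus the encoded offset, interpreted in host byte order. On my little-endian machine, that model reproduces the two halfword values:

C

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* First 24 bytes of the test's initial memory (Ethernet header
       plus the start of the IPv4 header). */
    const uint8_t mem[] = {
        0x00, 0x00, 0xc0, 0x9f, 0xa0, 0x97, 0x00, 0xa0,
        0xcc, 0x3b, 0xbf, 0xfa, 0x08, 0x00, 0x45, 0x10,
        0x00, 0x3c, 0x46, 0x3c, 0x40, 0x00, 0x40, 0x06,
    };
    uint16_t h;

    memcpy(&h, mem + 12, sizeof h);   /* ldxh %r3, [%r1+12] */
    printf("0x%04x\n", h);            /* prints 0x0008 on a little-endian host */

    memcpy(&h, mem + 16, sizeof h);   /* ldxh %r3, [%r1+16] */
    printf("0x%04x\n", h);            /* prints 0x3c00 */
    return 0;
}

Under the same model, however, a word load at [%r1+16] would return 0x3c463c00 (bytes 00 3c 46 3c), which matches neither of the values above; that is the part I cannot reconcile.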

What justifies the offset in the operation ldxw %r1, [%r1+16] landing at byte 26 or byte 30, counting from the beginning of the memory block, as the expected output of the program implies?
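For what it’s worth, the only derivation I could come up with for 30 is that the add %r1, %r2 executes before the ldxw: r2 is set to 0xe = 14; the first ldxh returns 0x0008 rather than 0x0081, so the VLAN branch is skipped and r2 stays 14; add %r1, %r2 then makes %r1 point at byte 14; and ldxw %r1, [%r1+16] therefore reads bytes 30..33 (c0 a8 01 01, i.e. 0x0101a8c0 as a little-endian word). I’m not sure this is the intended reading, though.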

Here is the relevant code from the test:

C

#include <stdint.h>

#define NETMASK 0xffffff00
#define SUBNET 0xc0a80100

struct eth_hdr {
    uint8_t eth_src[6];
    uint8_t eth_dst[6];
    uint16_t eth_type;
};

struct vlan_hdr {
    uint16_t vlan;
    uint16_t eth_type;
};

struct ipv4_hdr {
    uint8_t ver_ihl;
    uint8_t tos;
    uint16_t total_length;
    uint16_t id;
    uint16_t frag;
    uint8_t ttl;
    uint8_t proto;
    uint16_t csum;
    uint32_t src;
    uint32_t dst;
};

uint64_t entry(void *mem)
{
    struct eth_hdr *eth_hdr = (void *)mem;
    uint16_t eth_type;
    void *next = eth_hdr;

    if (eth_hdr->eth_type == __builtin_bswap16(0x8100)) {
        struct vlan_hdr *vlan_hdr = (void *)(eth_hdr + 1);
        eth_type = vlan_hdr->eth_type;
        next = vlan_hdr + 1;
    } else {
        eth_type = eth_hdr->eth_type;
        next = eth_hdr + 1;
    }

    if (eth_type == __builtin_bswap16(0x0800)) {
        struct ipv4_hdr *ipv4_hdr = next;
        if ((ipv4_hdr->dst & __builtin_bswap32(NETMASK)) == __builtin_bswap32(SUBNET)) {
            return 1;
        }
    }

    return 0;
}
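To rule out struct padding as a source of confusion, I also printed the field offsets myself (my own standalone check, not part of the test; every field above is naturally aligned, so no padding should be inserted):

C

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct eth_hdr  { uint8_t eth_src[6]; uint8_t eth_dst[6]; uint16_t eth_type; };
struct ipv4_hdr { uint8_t ver_ihl; uint8_t tos; uint16_t total_length;
                  uint16_t id; uint16_t frag; uint8_t ttl; uint8_t proto;
                  uint16_t csum; uint32_t src; uint32_t dst; };

int main(void) {
    /* Offsets within each header; all fields are naturally aligned. */
    printf("eth_type at %zu\n", offsetof(struct eth_hdr, eth_type));  /* 12 */
    printf("ipv4 src at %zu\n", offsetof(struct ipv4_hdr, src));      /* 12 */
    printf("ipv4 dst at %zu\n", offsetof(struct ipv4_hdr, dst));      /* 16 */
    return 0;
}

If those offsets are right, then without a VLAN tag src sits at 14 + 12 = 26 and dst at 14 + 16 = 30 from the start of the memory block, which is where my two candidate offsets of 26 and 30 come from.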

Here is the relevant ASM section and the initial memory:

-- asm

mov %r2, 0xe
ldxh %r3, [%r1+12]
jne %r3, 0x81, L1
mov %r2, 0x12
ldxh %r3, [%r1+16]
and %r3, 0xffff
L1:
jne %r3, 0x8, L2
add %r1, %r2
mov %r0, 0x1
ldxw %r1, [%r1+16]
and %r1, 0xffffff
jeq %r1, 0x1a8c0, exit
L2:
mov %r0, 0x0
exit

Initial memory:

00 00 c0 9f a0 97 00 a0
cc 3b bf fa 08 00 45 10
00 3c 46 3c 40 00 40 06
73 1c c0 a8 01 02 c0 a8
01 01 06 0e 00 17 99 c5
a0 ec 00 00 00 00 a0 02
7d 78 e0 a3 00 00 02 04
05 b4 04 02 08 0a 00 9c
27 24 00 00 00 00 01 03
03 00

Expected result: 0x1

Could someone help me understand these calculations and how they affect the test result?

I tried tracing the possible values for the offsets and the memory loads, but I’m still having trouble seeing how the specific values are determined and why they produce the expected output. I expected the loaded values to correspond directly to the memory content at the encoded offset, or at least to follow a clear pattern in the offset calculations, but neither is the case as far as I can tell. Any help clarifying these calculations and their impact on the test result would be greatly appreciated.
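For completeness, this is the standalone trace I wrote while trying to follow the ASM (my own sketch, not part of the conformance suite; it assumes a little-endian host and models the registers as plain 64-bit integers, with %r1 as an index into the memory block):

C

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* The test's initial memory, copied verbatim. */
static const uint8_t mem[] = {
    0x00, 0x00, 0xc0, 0x9f, 0xa0, 0x97, 0x00, 0xa0,
    0xcc, 0x3b, 0xbf, 0xfa, 0x08, 0x00, 0x45, 0x10,
    0x00, 0x3c, 0x46, 0x3c, 0x40, 0x00, 0x40, 0x06,
    0x73, 0x1c, 0xc0, 0xa8, 0x01, 0x02, 0xc0, 0xa8,
    0x01, 0x01, 0x06, 0x0e, 0x00, 0x17, 0x99, 0xc5,
    0xa0, 0xec, 0x00, 0x00, 0x00, 0x00, 0xa0, 0x02,
    0x7d, 0x78, 0xe0, 0xa3, 0x00, 0x00, 0x02, 0x04,
    0x05, 0xb4, 0x04, 0x02, 0x08, 0x0a, 0x00, 0x9c,
    0x27, 0x24, 0x00, 0x00, 0x00, 0x00, 0x01, 0x03,
    0x03, 0x00,
};

/* Little-endian 2- and 4-byte loads at a given byte offset. */
static uint64_t ldxh(uint64_t off) { uint16_t v; memcpy(&v, mem + off, 2); return v; }
static uint64_t ldxw(uint64_t off) { uint32_t v; memcpy(&v, mem + off, 4); return v; }

int main(void) {
    uint64_t r0, r1 = 0, r2, r3;

    r2 = 0xe;                       /* mov %r2, 0xe */
    r3 = ldxh(r1 + 12);             /* ldxh %r3, [%r1+12] -> 0x0008 */
    if (r3 == 0x81) {               /* VLAN branch, not taken for this packet */
        r2 = 0x12;                  /* mov %r2, 0x12 */
        r3 = ldxh(r1 + 16) & 0xffff;
    }
    if (r3 != 0x8) {                /* jne %r3, 0x8, L2 */
        r0 = 0;
    } else {
        r1 += r2;                   /* add %r1, %r2 -> r1 = 14 */
        r0 = 1;                     /* mov %r0, 0x1 */
        r1 = ldxw(r1 + 16);         /* reads mem[30..33] -> 0x0101a8c0 */
        r1 &= 0xffffff;             /* and %r1, 0xffffff -> 0x01a8c0 */
        if (r1 != 0x1a8c0)          /* jeq %r1, 0x1a8c0, exit */
            r0 = 0;
    }
    printf("r0 = 0x%llx\n", (unsigned long long)r0);  /* prints r0 = 0x1 */
    return 0;
}

This does produce 0x1, and only because the add %r1, %r2 happens before the ldxw, making the effective offset 14 + 16 = 30 rather than 16. That is the reading I would like someone to confirm or correct.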
