About priority_encoder #52
In this part, when LSB_HIGH_PRIORITY is set to 1, stage_enc[0][n] = !input_padded[n*2+0]. Why is input_padded inverted, and why are only the even bits taken? Yet when LSB_HIGH_PRIORITY is 0, it is not inverted and only the odd bits are taken.
Forget about "even" and "odd". The idea is that the code processes the input bits in pairs. For each pair of inputs, it produces one encoded output bit, along with a valid bit indicating whether any bit was set in that portion of the input.

Think about what the truth table for this looks like. If the high-priority bit is set, the other bit is irrelevant, so the encoding only needs to consider one bit, and which bit that is depends on which end of the input has the higher priority. For LSB high priority, if the LSB (index 0) is set, the encoded output should be 0, otherwise it should be 1; hence the value of the LSB is inverted to produce the encoded output. For MSB high priority, if bit 1 is set then the output should be 1, otherwise 0; hence it is not inverted.

This module was actually originally written with a recursive structure that might be easier to understand, but it causes some tool issues in synthesis: https://github.com/alexforencich/verilog-axi/blob/d694a6719013242680e3aa3bd49092ff724f157e/rtl/priority_encoder.v . The current version implements exactly the same logic, but with a loop instead of recursion.
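To make that truth table concrete, here is a minimal standalone sketch of one such pair. The module and port names are made up for illustration and are not part of the repository; it only mirrors the first-stage idea described above.

```verilog
// Illustrative sketch only: one pair of input bits compressed into a valid
// bit and a single encoded bit. The parameter selects which end has priority.
module pair_stage #(
    parameter LSB_HIGH_PRIORITY = 1
) (
    input  wire [1:0] in_pair,
    output wire       valid,
    output wire       enc
);

// valid: there is a request somewhere in this pair
assign valid = |in_pair;

generate
    if (LSB_HIGH_PRIORITY) begin
        // bit 0 wins: if it is set the encoded index is 0, otherwise 1,
        // so the encoded bit is simply the inverse of bit 0
        assign enc = !in_pair[0];
    end else begin
        // bit 1 wins: if it is set the encoded index is 1, otherwise 0,
        // so the encoded bit is just bit 1, uninverted
        assign enc = in_pair[1];
    end
endgenerate

endmodule
```

With LSB_HIGH_PRIORITY = 1, inputs 2'b01 and 2'b11 both encode to 0 because bit 0 is set and wins, and only 2'b10 encodes to 1; when neither bit is set, valid is 0 and enc is effectively a don't-care.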
It's clear to me now, thank you for your explanation. |
Hi, I am studying the priority_encoder module and I have some confusion about the following code. I would appreciate it if you could provide some explanation.
Q1: Why do you use the value of stage_valid[l-1][n*2+0] to make the selection when LSB_HIGH_PRIORITY is set to 1? I believe that in the code above you already compressed two requests into one valid bit with the line "assign stage_valid[0][n] = |input_padded[n*2+1:n*2];", so why do you skip the odd bit and use the [n*2+0] bit here? My understanding is that when LSB_HIGH_PRIORITY is set to 1, priority is given to the lower bits, but I don't see the relation between this code and that priority.
Q2: When WIDTH is 4, LEVELS equals 2. Then stage_enc[l][(n+1)*(l+1)-1:n*(l+1)] is 2 bits, while {1'b0, stage_enc[l-1][(n*2+1)*l-1:(n*2+0)*l]} looks like 3 bits to me. That would mean the bit widths do not match. Am I misunderstanding something?
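To keep the widths straight for Q2, here is the substitution written out for l = 1 and n = 0, using the index expressions quoted above with the multiplication signs included. This is just my own arithmetic as a checking aid, not a quote from the module's author.

```verilog
// Substituting l = 1, n = 0 into the expressions quoted in Q2
// (WIDTH = 4, so LEVELS = 2 and level-0 encoded chunks are 1 bit wide):
//
//   left-hand side:
//     stage_enc[1][(0+1)*(1+1)-1 : 0*(1+1)] = stage_enc[1][1:0]        // 2 bits
//
//   right-hand side:
//     {1'b0, stage_enc[0][(0*2+1)*1-1 : (0*2+0)*1]}
//       = {1'b0, stage_enc[0][0:0]}                                    // 1 + 1 = 2 bits
//
// In general the level-(l-1) encoded chunks are l bits wide, so prefixing
// one selector bit gives l+1 bits, matching the level-l slice width.
```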