Hey @t-taichi! I'll do my best to answer, but I may not be understanding exactly which part you are asking about.

First, UniDec creates a matrix of m/z vs. charge, so each m/z data point has a vector of charge states. The goal is to distribute the intensity from the m/z data across the available charge states. Ideally, we will have one major charge state with all the intensity. Alternatively, we can have multiple charge states when peaks overlap.

UniDec does this by first assuming all charge states in the vector are equally likely. It then looks for neighboring charge states at different m/z values: if it sees intensity there, it raises the intensity for that (m/z, z) point in the matrix. After one iteration of that, it creates a simulated spectrum, compares it with the original data, and then scales each charge vector to make the simulated data match the actual data. It then iterates again, adjusting each (m/z, z) point based on the neighboring (m/z+1, z+1) and (m/z-1, z-1) points, creating another simulated spectrum, and rescaling the whole vector. It reaches convergence when it hits the maximum iteration limit or when it stops changing between iterations within some cutoff (I think 0.0001%).

Let me know what other questions you have. Thanks!
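To make the iteration above concrete, here is a toy NumPy sketch of the scheme: initialize the (m/z, z) matrix uniformly, reinforce each point from its diagonal neighbors, simulate a spectrum by summing over charge, and rescale each charge vector to match the data. This is my own simplified illustration, not UniDec's actual code: the function name is made up, the m/z axis is assumed evenly spaced, and the diagonal neighbor is taken as the adjacent array index, whereas in the real algorithm the neighboring point is the same mass at an adjacent charge state.

```python
import numpy as np

def unidec_like_deconvolution(mz, intensity, charges, n_iter=100, tol=1e-6):
    """Toy sketch of the iterative deconvolution described above.

    mz        : 1D array of m/z values (assumed evenly spaced here)
    intensity : 1D array of measured intensities (same length as mz)
    charges   : 1D array of candidate charge states
    Returns the (n_mz, n_z) matrix B of intensity assigned to each
    (m/z, z) point.
    """
    intensity = np.asarray(intensity, dtype=float)
    n_mz, n_z = len(mz), len(charges)

    # Step 1: assume all charge states are equally likely for every m/z.
    B = np.ones((n_mz, n_z)) / n_z

    prev = None
    for _ in range(n_iter):
        # Step 2: reinforce each (m/z, z) point using its diagonal
        # neighbors (m/z+1, z+1) and (m/z-1, z-1). If intensity is
        # present there, this charge assignment gains weight.
        up = np.zeros_like(B)
        up[:-1, :-1] = B[1:, 1:]
        down = np.zeros_like(B)
        down[1:, 1:] = B[:-1, :-1]
        B = B * (1.0 + up + down)

        # Step 3: simulate a spectrum by summing over charge, then
        # rescale each charge vector so the simulation matches the data.
        simulated = B.sum(axis=1)
        scale = np.divide(intensity, simulated,
                          out=np.zeros_like(simulated),
                          where=simulated > 0)
        B = B * scale[:, None]

        # Step 4: stop when the matrix stops changing between iterations
        # (relative change below tol) or the iteration limit is hit.
        if prev is not None and np.abs(B - prev).sum() < tol * np.abs(B).sum():
            break
        prev = B.copy()
    return B
```

Because every iteration ends with the rescaling step, the simulated spectrum (the matrix summed over charge) always reproduces the measured intensities; the iteration only redistributes intensity among charge states.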
-
Hello. I am a student and would like to understand the details of UniDec's processing.
In UniDec's basic processing, after copying the probabilities of each m/z value to all charges on the charge axis, I think there is a phase that allocates peaks to a specific charge.
I couldn't find what this phase does in the paper, and I don't know where it is in the code.
Could anyone please tell me more about it?