Logging vs. Exceptions vs. Result Objects #2
I like this problem. Curious why you didn't specialize exceptions @dbwiddis? It feels to me that if I am, for example, in need of elevated permissions to get X, I should get an exception that tells me exactly that.
The concept of API extensions for different platforms like @YoshiEnVerde suggested solves the case where a value simply isn't there on a platform. When there's a permissions error or missing software, then the error should make that clear.
As you may recall from when you recruited me, I'm an amateur/hobbyist programmer without formal training and with a limited skill set. I didn't even know I could specialize exceptions when I got started. :) This project has been a great learning experience for me, and now I know better.
But that removes the platform-independent API that is the core of the project. I want my users to be able to write the same code regardless of which OS they're running on.
Since Windows is missing load average, it unfortunately cannot be included in the base type.
With one of the objectives for this version being decoupling the fetching from the API, we need to think of both parts with different design eyes. With OSHI trying to both have a small footprint and be performant (which, most of the time, are mutually exclusive goals), we can use this decoupling to tweak details for the best.

For example: using immutable objects is great for concurrency and security, but a nightmare on memory resources, even with caching mitigating things a bit. The opposite is also true: a very common solution for Java systems that need to keep resource consumption to a minimum is to offload the cost from memory to the CPU. This is done by saving all the data in the cheapest representation possible, then casting/parsing the data every single time a read/write op is requested.

This second approach is of note for all our info sources that involve parsing OS command results. Especially so, since we could also keep a cache in the next layer for the parsed values themselves, and the data updates will be manually triggered.
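A minimal sketch of that second approach, assuming nothing about actual OSHI classes (all names below are made up for illustration): the raw command output is kept in its cheapest form, and the parsed value is computed on demand and cached in a second layer that is invalidated on manual refresh.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: keep the raw command output cheap, pay CPU on read, cache the parsed value.
class RawValueStore {
    private final Map<String, String> raw = new ConcurrentHashMap<>();    // cheapest representation
    private final Map<String, Long> parsed = new ConcurrentHashMap<>();   // next-layer parsed cache

    void update(String key, String commandOutput) {
        raw.put(key, commandOutput);
        parsed.remove(key);    // a manual update invalidates the parsed cache
    }

    long getAsLong(String key) {
        // Parsing cost is paid on read instead of holding heavy objects in memory.
        return parsed.computeIfAbsent(key, k -> Long.parseLong(raw.get(k).trim()));
    }
}
```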
+1 for using basic inheritance (@cilki's example with CPU) to avoid having fields that just don't exist on a specific OS
I'm not so fond of using inheritance to solve this problem, since Java doesn't support multiple inheritance, the issue is more finely grained than just OS, and exceptions are common:
- Windows doesn't have load average, so in the example above we create a *nix parent. But macOS doesn't have Disk Queue Length, so do we create a parent of all OS's except Mac for the Disks?
- Mac doesn't have IOWait and IRQ information, and Windows doesn't have Nice and IOWait, so should we have different array sizes here? (Or is returning "0" okay, as we currently do?)
- Some stuff is just insanely slow on Windows (counting file handles), so we omit it, while letting the user know it's just "moderately" slow on *nix systems.
- Some features are version-dependent (particularly newer vs. older Windows versions, or 32- vs. 64-bit), or language-dependent, or dependent (especially on *nix) on whether the user has root privileges.
- There's stuff that we'd like to put in (Windows Services) that has more information (running/stopped) than the equivalent in other OS's (which are just a list of text strings and don't correlate to process IDs).
- Then there's the Windows Sensor stuff, which pretty much relies on OpenHardwareMonitor running in the background, and which can in theory be turned on and off while OSHI is running. If it's on, we can return values for some sensors, but what do we return if we only have temperature but not fan speed? If it's off (or OHM doesn't detect fans), do we re-generate a new object with different inheritance that doesn't have a fan speed method?

I could go on, but the point is that we can't solve all of these problems with inheritance, and if we write OS-specific code we violate DRY a lot. We need a way to have a common command across most OS's, with a standardized "not on this OS" response.
The gist of my above comment is that I'd prefer a standardized, platform-independent way to report "not available on this OS" over OS-specific inheritance.
I love the detailed examples here @dbwiddis. ... scratching my head mostly at this point :)
The main problem with not using inheritance is that it defeats pretty much 90% of the point of using OSHI, beyond not having to implement your own calls/parsers for the system values. Any use case that depends on getting a specific value that can only be recovered on a specific OS, for a specific architecture, under a specific hardware stack, voids every single point of having a common library. If you need an ID that can only be gotten in Windows, for a specific Windows-only type of service, that can only be gotten on 32-bit archs, on a SPARC machine, then you'd be best served just doing a straight call for the service instead of importing OSHI.

The idea behind having inheritance is that the 85% of the data that is, relatively, common between all systems should be accessible in three or four lines, without ever caring what tech stack is under you. That's also the point of a standardized API: to remove the complexity of all the details and to abstract the implementation for the user. If you need to manually check which OS you're working under, because you have special use cases for each stack, then you're already doing that check by hand; you could just as well do a manual cast on the returned object, or call a more specialized API (that we would also be providing).
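For reference, the kind of three-or-four-line access being described looks roughly like this with OSHI 3's existing API (recalled from memory, so treat the exact method names as approximate):

```java
import oshi.SystemInfo;
import oshi.hardware.CentralProcessor;

public class QuickStart {
    public static void main(String[] args) {
        // No OS detection anywhere: the same lines run on Windows, Linux, macOS, etc.
        SystemInfo si = new SystemInfo();
        CentralProcessor cpu = si.getHardware().getProcessor();
        System.out.println("Load average: " + cpu.getSystemLoadAverage());
    }
}
```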
As counter examples:
The point is that we should have multiple layers of interaction:
The fact of the matter is that currently (OSHI3) there is NO real common interface. Because we have elements in there that are OS-specific (or stack-specific), they fail if the OS/stack is not the expected one, and they basically require the user to already know what they were going to find before calling OSHI. In the end, the common interface is just a fallacy. Even worse, because of that common interface, the values returned are inconsistent across platforms. For example, the issue that originally brought me to OSHI: on one OS GetSerialNumber() returns the OS serial number, on another it returns the motherboard's serial number, on another the CPU's.
What we should do is define what values we consider to be standard, and which type they should be. For example:
So, for 9 OSs, when the method is called, the driver/cache is queried, a value is recovered, then cast into a string, set into a Success Return Object, and returned.
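A sketch of that flow, assuming a simple result wrapper (none of the types below exist in OSHI; they are illustrative only):

```java
import java.util.Optional;

// Illustrative only; none of these types exist in OSHI today.
class Result<T> {
    final Optional<T> value;
    final String failureReason;           // null when the call succeeded
    private Result(Optional<T> value, String reason) { this.value = value; this.failureReason = reason; }
    static <T> Result<T> success(T v)     { return new Result<>(Optional.of(v), null); }
    static <T> Result<T> notSupported()   { return new Result<>(Optional.empty(), "NOT_SUPPORTED_ON_THIS_OS"); }
}

interface SerialNumberDriver {
    String querySerialNumber();           // OS-specific driver (or cache) lookup
}

class StandardizedApi {
    private final SerialNumberDriver driver;
    StandardizedApi(SerialNumberDriver driver) { this.driver = driver; }

    /** Queries the driver/cache, casts to String, and wraps it in a result object. */
    Result<String> getSerialNumber() {
        try {
            return Result.success(String.valueOf(driver.querySerialNumber()));
        } catch (UnsupportedOperationException e) {
            return Result.notSupported();  // the 10th OS simply doesn't have the value
        }
    }
}
```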
So, why allow this for values that are present in most stacks, but not for values that exist only in specific stacks? If we did the same for a value that is only available in 64-bit Debian Linux (for example), then every user of the API would need to check that every single method they try to use conforms to their intended use cases. If we standardize, then offer the capability of promoting to specialized APIs, everybody can follow the same logic: use the standard API by default, and promote to a specialized API only when a specific functionality is needed.
Did you mean to say not there? This isn't the case we're discussing. We're discussing a value that is common across most OS's but missing on one or a few (e.g., load average). By using the above inheritance pattern (a *nix parent with load average), the user has to know which OS they're on before they can even ask for the value. Your counter examples are specifically addressed by the current common interface-based API implementation. The user should not care what OS they are on when they ask for a Load Average. The user should be notified (via exception, sensible default, or documented invalid value such as a negative number) if what they ask for isn't provided by the system they are monitoring.
This is exactly what I'm proposing, except with more options/detail than just "unavailable". There's an "unavailable now but try again" vs. "unavailable ever on this system" vs. "unavailable but if you run with elevated permissions it might be" vs. "unavailable but if you install this package or run this third-party software I can use it".
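Those distinctions map naturally onto a status attached to each result. A possible sketch (the enum and its constants are invented for illustration, not proposed API):

```java
// Illustrative only: one possible way to encode the "unavailable" flavors.
enum Availability {
    AVAILABLE,              // value present
    TRY_AGAIN_LATER,        // unavailable now, but a later call may succeed
    NOT_ON_THIS_SYSTEM,     // unavailable ever on this platform/version
    NEEDS_ELEVATION,        // might be available if run with elevated permissions
    NEEDS_THIRD_PARTY       // available only if some package/software is installed and running
}
```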
No, it shouldn't. If you're talking about a value that is there for almost every OS but missing on a few, and we consider it standard enough to add to the Standardized API, then for those OSs it would return a failure of "not supported". If OSHI fails because of issues unrelated to that, it should return the corresponding error, e.g., failure by not available, failure by forbidden, etc.
Single inheritance in Java is not an error; it's part of the design. We've been working with that for over 20 years. That's why interfaces were added.
I agree -- I would much rather provide load average, and do something different for Windows only. I would much rather provide Disk Queue Length and do something different for macOS only. This is the philosophy I've gone with on 3.x, returning documented values when not available (e.g., 0). I have a feeling we're saying the same thing here, but getting wrapped up in how "inheritance" applies to this paradigm. I don't see a need for inheritance in the interface-exposed classes. There's some value in inheritance to remove redundancy in the access/drivers (we have common JNA-based unix functions, for example).
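The "documented value when not available" style looks roughly like this; the detection logic below is a stand-in, and the JMX call is just a convenient way to make the sketch runnable:

```java
// Sketch of the "documented default" style; not OSHI code.
public class LoadAverageExample {

    /**
     * @return the 1-minute load average, or a negative value if not available
     *         on this platform (e.g., Windows), as documented.
     */
    public static double getSystemLoadAverage() {
        boolean isWindows = System.getProperty("os.name").toLowerCase().startsWith("windows");
        if (isWindows) {
            return -1d;   // documented sentinel: Windows has no load average
        }
        // Standard JMX bean; it also returns -1 if the OS can't provide the value.
        return java.lang.management.ManagementFactory.getOperatingSystemMXBean()
                .getSystemLoadAverage();
    }

    public static void main(String[] args) {
        System.out.println("Load average: " + getSystemLoadAverage());
    }
}
```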
LOL, yes, I get the same feeling
As a last quick caveat:
Here's the issue we're having:
For your next trick, tell me how I should handle per-core CPU frequency. :) (We currently have a CPU interface but should consider having "physical processor" objects on that, which have frequency, and "logical processor" objects on those physical processors. :) |
If both answers are yes, then we just need to emulate the same architecture in our objects.
If one of the two answers is no, then we only implement the design for the valid one (if any) in the Standardized API, and we implement the others in each stack, where applicable.
The main thing is that we can have multiple layers to promote to, not just OS-specific ones. For example, let's say we have a 50-50 split on the processors example (or even an 8:2 ratio, so long as at least 2 stacks share a functionality). Then, we can have the following (impl) interfaces:
The Standardized CPU interface might define only a very basic set of methods, while the other two would each add their own more specific methods. Thus, if the user wants more granular details, they might do:
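One hedged sketch of that promotion, with every interface and method name invented for illustration:

```java
// Illustrative only: a standardized interface plus two stack-specific extensions.
interface Cpu {
    double getCurrentFrequency();                  // very basic, available everywhere
}

interface PerCoreFrequencyCpu extends Cpu {
    double[] getPerCoreFrequencies();              // only on stacks that expose per-core values
}

interface WindowsCpu extends Cpu {
    int getWindowsOnlyMetric();                    // only meaningful on Windows
}

class PromotionExample {
    static void printFrequencies(Cpu cpu) {
        // Standard usage: no OS check needed.
        System.out.println("CPU frequency: " + cpu.getCurrentFrequency());

        // Promotion: only users who need granular details opt in to the specialized interface.
        if (cpu instanceof PerCoreFrequencyCpu) {
            PerCoreFrequencyCpu detailed = (PerCoreFrequencyCpu) cpu;
            System.out.println(java.util.Arrays.toString(detailed.getPerCoreFrequencies()));
        }
    }
}
```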
That would definitely achieve the granularity required, but also entails many more interfaces for the user to know about. Maybe that can be abstracted away from the usage, but I think it would get fairly complicated internally. Will need some time to process this.
Yes, it would. The way I'm imagining this is that we can have a tutorial/how-to/getting-started doc that explains the standardized API, without all the bells and whistles. Then, we have a kind of Advanced Manual with all the more granular interfaces. The idea would be that the standardized interfaces should cover most use cases for most users, leaving the few users that need a specific functionality to check the advanced manual (or the full JavaDoc reference) for the specific interface they might need.
Linux, FreeBSD, and Solaris have per-logical-core values (although in reality the source value is per physical core). Windows and macOS have one single value for the whole CPU, which would be a reasonable default to replicate across all processors.
Which was more the point of my question. In the real world, we have a logical processor (a "soft"ware object) which is one of maybe multiple on a physical processor (a.k.a. core, a hardware object), which is one of maybe multiple on a package (a.k.a. socket, a hardware object), which is one of maybe multiple on the overall CPU. Software (e.g., /proc/cpu) returns the current OSHI per-processor output (CPU ticks) at the logical processor level, but properly evaluating load requires comparison at the physical processor (core) level... e.g., the sum of ticks on processors 0 and 1 is what I care about. So I would like the getCore() method of the logical processor object to return "core 0" simply by evaluating that method on its parent object. Similarly, "core 0" shares its currentFrequency with the package it's on, so getPackage() on the core should return "package 0". And all of those can get the CPUID info from getProcessorId() on their parent. Or is creating 8 logical processor objects, each with its own CPU ticks, overkill when I have easy access to a 2D array with that info at the top level?
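A minimal sketch of that parent-navigation idea, with illustrative class names only:

```java
// Illustrative sketch of the logical -> physical -> package hierarchy being described.
class CpuPackage {
    final String processorId;                       // CPUID info lives at the package level
    CpuPackage(String processorId) { this.processorId = processorId; }
}

class PhysicalCore {
    final CpuPackage pkg;
    long currentFrequency;                          // cores share the package's frequency
    PhysicalCore(CpuPackage pkg, long freq) { this.pkg = pkg; this.currentFrequency = freq; }
    CpuPackage getPackage() { return pkg; }
}

class LogicalProcessor {
    final PhysicalCore core;
    long[] ticks;                                   // per-logical-processor CPU ticks
    LogicalProcessor(PhysicalCore core) { this.core = core; }
    PhysicalCore getCore() { return core; }
    String getProcessorId() { return core.getPackage().processorId; }   // delegate to parents
}
```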
If we can set up some simple rules/generalities on how to do this, we can follow a code-first, standardize-later approach.
What we need the most for this approach is a well-defined procedure for adding functionalities. That way, we avoid the tangle that comes out of the same functionality returning one kind of result in one place and a different one in another.
The problem is that you're only thinking of the frequencies there, and for frequencies alone it would indeed be more performant that way. However, if you actually have a dozen values per processor (S/N, frequency, thermals, type, enabled, state, etc.), you're now talking about a dozen type[][] arrays that you have to manually handle each time, instead of just a collection of processors. More so, we come back to the original question: is this something that is standard for all stacks, or specific to a select few?
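A quick sketch of the contrast being described (field names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: parallel arrays per attribute versus a collection of processor objects.
class ParallelArraysVsObjects {
    // Option A: one array (or 2D array) per attribute, indexed by logical processor.
    long[][] ticks = new long[8][4];
    double[] frequencies = new double[8];
    String[] serialNumbers = new String[8];
    // ... one more array for every additional attribute.

    // Option B: one object per processor carrying all of its attributes.
    static class ProcessorInfo {
        long[] ticks;
        double frequency;
        String serialNumber;
    }
    List<ProcessorInfo> processors = new ArrayList<>();
}
```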
An easy case. Or not really. What if the boolean result is "unknown"?
Do you literally mean the Java 'Number' object? Because we have access to UINT64 data that should be properly returned as a BigDecimal, but right now we just strip the sign bit and return a long.
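For reference, one way to hand back the full unsigned 64-bit range without losing the top bit; whether the public type should be BigDecimal, BigInteger, or something else is a separate decision:

```java
import java.math.BigInteger;

public class Unsigned64Example {
    public static void main(String[] args) {
        // A UINT64 value with the high bit set arrives in Java as a negative long.
        long raw = 0xFFFFFFFFFFFFFFFFL;                     // unsigned 18446744073709551615

        // Stripping the sign bit (current behavior) silently halves the range.
        long stripped = raw & Long.MAX_VALUE;

        // Converting via the unsigned string preserves the full value.
        BigInteger full = new BigInteger(Long.toUnsignedString(raw));

        System.out.println("stripped = " + stripped);       // 9223372036854775807
        System.out.println("full     = " + full);           // 18446744073709551615
    }
}
```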
Good point. We probably don't need the whole inheritance structure and can just do a ...
Be aware that I'm not actually saying this should be the exact way we do it. I tend to design for higher-level solutions, so I use more abstraction than the project might need. I'm more interested in setting things up in a way that lets us do this kind of thing later. If we're going to decouple fetching from the API and caching, there's nothing keeping us from doing a low-level API and a high-level API at the same time:
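One hedged sketch of that split, with every name invented for illustration: the low-level API exposes raw per-stack queries, and the high-level API adds caching and a stable shape on top.

```java
// Illustrative only: the same data exposed at two levels.
interface LowLevelCpuDriver {
    // Raw, stack-specific fetch: no caching, no formatting, may throw on unsupported stacks.
    long[] readRawTicks();
}

class HighLevelCpu {
    private final LowLevelCpuDriver driver;
    private long[] cachedTicks;                 // the high level adds caching and a stable shape
    private long lastRefresh;

    HighLevelCpu(LowLevelCpuDriver driver) { this.driver = driver; }

    synchronized long[] getTicks() {
        long now = System.currentTimeMillis();
        if (cachedTicks == null || now - lastRefresh > 1000) {
            cachedTicks = driver.readRawTicks();
            lastRefresh = now;
        }
        return cachedTicks.clone();
    }
}
```

Power users could call the low-level driver directly, while everyone else stays on the high-level, cached API.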
Then, by mathematical definition, it's not a boolean result.
Nah, just used Numeric to avoid having to list every single numeric type we have access to. We can use primitives, boxed, atomics, whatever corresponds to the type needed.
The main detail here is that, with the API and the drivers decoupled, there is no reason for the API to mirror anything in the real machine. We only need to know how to map between them. The API should be designed to be usable by the users, to be functional for whatever we decide to do with this version of the library. For example, in that processor detail you give: at most, you just add an ID for the physical processor it belongs to, and the user can use that to filter the PhysicalProcessor list. Also, don't forget that, if the only thing we take from the PhysicalProcessor interface is the list it contains, we could just as well keep the full list of all logical processors in the main parent interface and add that PhysProc ID to the LogProc in some ParentProcessorId field, or something similar.
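A sketch of that flat-list alternative (again, all names are illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative only: logical processors kept in one flat list, linked to their
// physical processor by ID instead of by object nesting.
class LogicalProc {
    final int id;
    final int parentPhysicalId;      // the "ParentProcessorId" style field
    LogicalProc(int id, int parentPhysicalId) {
        this.id = id;
        this.parentPhysicalId = parentPhysicalId;
    }
}

class FlatListExample {
    static List<LogicalProc> onPhysicalProcessor(List<LogicalProc> all, int physicalId) {
        // Users filter the single flat list instead of walking a hierarchy.
        return all.stream()
                  .filter(lp -> lp.parentPhysicalId == physicalId)
                  .collect(Collectors.toList());
    }
}
```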
So the high-level API uses the low-level and is also the only part visible to the user? Or does the user choose which to use according to caching requirements?
If I wanted to calculate an over- or under-clocking ratio. Say my CPU (as reported by the ...)
I'm not duplicating the value. I'm either using inheritance to access the parent CPU's frequency, or ...
What I meant with my question was not what you'd do with the info itself, but why you'd want to access it through the LogicalProcessor object. If there's a chance different LogProcs within the same PhysProc might have different values for frequency, then we could consider the field intrinsic to the LogProc, add it in, and in the cases where all LogProcs share the same frequency as their parent PhysProc, we'd load the value from there. Another detail on this issue: I don't think we should do inheritance between the PhysicalProcessor and the LogicalProcessor classes. They're not parent/child; they're both siblings under the main Processor class (and, even then, they might all be siblings under a specific interface, but not hierarchically linked).
So that I could have a ...
Agreed. Actually, I think an inner class might be the correct answer here. A ...
Ahh, the issue is that you're putting the results and the values in the same-ish bag.
How I've been thinking of this is by using those multiple layers we've been talking about to separate logic from entities, and entities from values. Basically, we have the following layers:
As an example, using the current OSHI3 elements: we have a single class that bundles together a network interface's identity and the usage values read from it. In OSHI5, we would instead have an entity object that only identifies the interface, while the usage values would live in separate value objects produced on each query. So, checking the usage over time of a single NetIF, for example, would mean something like:
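A hedged sketch of what that could look like; every type and method name here is invented for illustration and is not existing or proposed OSHI API:

```java
// Illustrative only: the entity identifies the interface, the value object carries a snapshot.
class NetIfEntity {
    final String name;                       // stable identity, e.g. "eth0"
    NetIfEntity(String name) { this.name = name; }
}

class NetIfUsage {
    final long timestampMillis;
    final long bytesReceived;
    final long bytesSent;
    NetIfUsage(long timestampMillis, long bytesReceived, long bytesSent) {
        this.timestampMillis = timestampMillis;
        this.bytesReceived = bytesReceived;
        this.bytesSent = bytesSent;
    }
}

interface NetIfDriver {
    NetIfUsage query(NetIfEntity netIf);     // each call returns a fresh value snapshot
}

class UsageOverTime {
    static long bytesReceivedBetween(NetIfDriver driver, NetIfEntity eth0,
                                     long intervalMillis) throws InterruptedException {
        NetIfUsage before = driver.query(eth0);
        Thread.sleep(intervalMillis);
        NetIfUsage after = driver.query(eth0);
        // The entity never changes; only the value snapshots do.
        return after.bytesReceived - before.bytesReceived;
    }
}
```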
Early in OSHI's development, the code was littered with UnsupportedOperationExceptions. This seems an appropriate response when we're asking for data that simply doesn't exist (e.g., load average on Windows), but it requires the user to explicitly handle these exceptions.

I moved away from exceptions into log messages, which allowed more finely grained control of what was a normal problem vs. a failure (warnings vs. errors, etc.) and just returned sensible defaults (negative values, 0, empty strings, or empty collections). This introduces the opposite problem of not allowing the user to handle exception types.
OSHI needs a standardized method of handling these types of situations:
We can use Optional results in key places, or encapsulate the result in our own OshiResult class that includes:
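A hedged sketch of what such a class could include, with the field set guessed from the options discussed in the comments rather than any committed design:

```java
import java.util.Optional;

// Illustrative only: one possible shape for an OshiResult wrapper.
public class OshiResult<T> {

    public enum Status {
        OK,                     // value is present
        NOT_SUPPORTED,          // this platform will never provide the value
        TRY_AGAIN,              // temporarily unavailable; a later call may succeed
        NEEDS_ELEVATION,        // may be available with elevated permissions
        NEEDS_THIRD_PARTY       // requires an external package/tool to be installed and running
    }

    private final Status status;
    private final T value;              // null unless status == OK
    private final String detail;        // optional human-readable explanation for logs

    public OshiResult(Status status, T value, String detail) {
        this.status = status;
        this.value = value;
        this.detail = detail;
    }

    public Status getStatus() { return status; }
    public Optional<T> getValue() { return Optional.ofNullable(value); }
    public String getDetail() { return detail; }
}
```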