overhaul python error reporting for structured failure #7

Open · wants to merge 24 commits into base: main
Conversation

CesiumLifeJacket
Contributor

For my own project, I have updated Rx.py to be compatible with both Python 2 and 3. I have also rewritten a lot of the code to be slightly more concise/modern/pythonic.

The other major feature is that the types' check() methods now have an augmented version called validate(), which instead of simply returning True or False, raises a SchemaMismatch exception with an informative error message when the value doesn't match the schema.

I've also changed the behavior of Factory.__init__() to register core types automatically, and take register_core_types as a boolean argument instead of as a key in a dictionary. I am unsure why this argument was a dictionary in the first place, and what this afforded the user, so this may be a change worth reverting.
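
To illustrate the difference, here's a minimal sketch of the two entry points (the schema and the exact message wording here are made up for the example; only the check()/validate() split and the Factory change are from this branch):

import Rx

factory = Rx.Factory()  # core types are registered automatically now
schema = factory.make_schema({'type': '//int', 'range': {'min': 1}})

schema.check(0)          # returns False, with no explanation

try:
    schema.validate(0)   # raises instead of returning False
except Rx.SchemaMismatch as err:
    print(err)           # prints an informative message, e.g. a range complaint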

@rjbs
Owner

rjbs commented Aug 11, 2015

Thanks very much for this! I wanted to let you know that I have received it and look forward to reviewing it, but just haven't made the time yet.

@rjbs
Owner

rjbs commented Sep 1, 2015

So, as usual, I am late in delivering. I'm afraid some conferencing got in the way.

I'm very pleased to see Python 3 compatibility added, but I have to balk at the structured errors. It's not that I don't want structured failure information — I do, I do! — it's that the spec tests have added information on how failures should be reported. They're implemented by the assert_valid method in Perl, and I'd really like for the reports to be the same in Python.

If you're stoked at the chance to implement them, that would be excellent. I would be pleased to help. If not, I'll probably merge at least the changes to make this Python 3 compatible, and hope to make time to do the structured failure information myself.

@CesiumLifeJacket
Contributor Author

Ack, this is what I get for not reading all the documentation. I'm pretty free this week so I will get these errors implemented properly.

@CesiumLifeJacket
Contributor Author

I just started reviewing the Perl code that implements the structured failures, and maybe it's because I'm unfamiliar with Perl, but I'm not getting a feel for these error structures or how best to translate them into Python. Do you have any language-agnostic specification for how exactly these should work? I saw the spec.pod file but would like a more detailed description.

@rjbs
Owner

rjbs commented Sep 3, 2015

I wanted to note, too, that you are not at fault for missing some sort of glaring documentation. This was done (nicely, I think) in Perl, but there was no big spec update or anything, or big TODO documentation. It just sort of sat there.

So, it's been a while since I worked on this, so I'm going to sort of give you an unstructured braindump, and hope it's useful.

  • some schemata are leaves, and likely to just give you a simple good/no-good with a reason
  • others are like trees that can give you a bunch of reasons, based on all their leaves below

For a good example of a leafy type, look at spec/schemata/int-range.json. The fail entry of the spec test shows that in every case, we either say "rejected because of type" or "rejected because of range" (meaning value). It's also a good example of how the heck a basic spec test shows failure. data and check are empty because there's no path to the input or the subcheck. There's just an error.

spec/schemata/array-3-int.json is a simple intro to a tree-like check. We start to see data and check have values. data tells us how to find the piece of the input that was no good. When we're looking for up to three integers, and the 0th element is a string, data is 0, because the 0th datum was no good. When the 1th element is a string, data is 1, etc.

(At this point, I am panicking, because I can't force any tests to fail to double-check things. New task for rjbs: figure out what key fact I have forgotten in the last year!)

A more complex tree-like check is spec/schemata/multi/seq-ii-2bools.json. It explains that check is a drill-down path to the subcheck that failed. When check is [ "contents", 0 ], it means that there's a "contents" check, which has n elements, and the 0th one was no good.

The same thing, by the way, applies to data, showing a drill-down path to the deep datum that failed.
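
In Python terms you could picture one such failure as a little record like this (purely illustrative; the spec files carry the same three fields in JSON):

# The 0th element of the sequence was rejected by the 0th "contents" subcheck.
failure = {
    "data":  [0],              # drill-down path into the input
    "check": ["contents", 0],  # drill-down path into the schema
    "error": ["type"],         # why it was rejected
}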

These deep paths are important because they let you get the "data path" and "check path" of a failure. These are critical. You can see an example of what you can do with them, and why it's so important, at http://rjbs.manxome.org/rubric/entry/1743

You end up getting, in Perl, a stringified exception that says something like "the data at ->[0]->{foo}->[1] fails the check at ->{contents}->[0] because of 'range' violation"

I hope this was useful. If not, please let me know and I will try again! Also, let me know if I've just left out something helpful.

Tomorrow, I hope I can fix the weird problem I had with making tests fail! If I can do that, I can add a few more demonstrative tests!

@CesiumLifeJacket
Contributor Author

The path data structures make sense to me in a context where only one subcheck fails, but what does the error structure look like in more complicated situations where, say, the 0th and 1st data are both no good, or a rec schema has an unknown key and an int outside the accepted range? Or a data structure fails to meet any of the requirements in an any schema, possibly for even more complex, multi-error reasons? Do you just return the first error encountered?

I apologize if all this should be obvious from reviewing the Perl code; languages that demand you prefix your variable names with special characters give me the willies, so thank you for putting it all in natural language. :)

@rjbs
Owner

rjbs commented Sep 3, 2015

(…come to the Perl side … but not until you finish the Python updates… 😉)

For an extra-complex failure, check out schemata/multi/rec-key-opt-rest.json. What a name!

It's this:

---
type: //rec
required: { key: //int }
optional: { opt: //bool }
rest: { type: //map, values: //bool }

In other words, it's a dict where you must have an int as the value for "key", you may have an "opt" that's a bool, plus any number of other entries with bool values. (This would be more interesting if "opt" had a non-bool type, but so it goes.)

You can fail this in a bunch of ways. You could have something bad for "opt" or for "key", or you could be missing "key", or you could have an entry for "foobar" with a non-bool value.

The tests run this schema against the input obj/opt-pants-rest-pants which looks like this:

{ "opt": "pants", "rest": "pants" }

…and which has three obvious problems:

  • key is missing
  • opt is not a bool
  • rest (which could have been named anything) is not a bool

Dutifully, we expect three failures, as we see in the first file I mentioned:

      "opt-pants-rest-pants": {
        "errors":
          [
           {
            "data": [ ],
            "check": [ ],
            "error": [ "missing" ]
           },
           {
            "data": [ "opt" ],
            "check": [ "optional", "opt" ],
            "error": [ "type" ]
           },
           {
            "data": [ "rest" ],
            "check": [ "rest", "values" ],
            "error": [ "type" ]
           }
          ]
      }
  1. There is a missing value. (Why is there no check/data for this? I'd need to double-check.)
  2. The type of the opt datum fails the optional/opt check.
  3. The type of the rest datum fails the rest/values check.

For the question about //any, I'm going to force you to look at a little Perl. This is the assert_valid routine from the Perl implementation of //any. (BTW, I'd like to keep method names matchy across implementations if possible, so assert_valid in Python, too.)

sub assert_valid {
  return 1 unless $_[0]->{of};

  my ($self, $value) = @_;

  my @failures;
  for my $i (0 .. $#{ $self->{of} }) {
    my $check = $self->{of}[ $i ];
    return 1 if eval { $check->assert_valid($value) };

    my $failure = $@;
    $failure->contextualize({
      type       => $self->type,
      check_path => [ [ 'of', 'key'], [ $i, 'index' ] ],
    });

    push @failures, $failure;
  }

  $self->fail({
    error    => [ qw(none) ],
    message  => "matched none of the available alternatives",
    value    => $value,
    failures => \@failures,
  });
}

It tries each subcheck. As soon as one matches, it returns 1 — any match is a total success! Otherwise, it accumulates each failure as it goes. The contextualize method on a failure adds path and check data to it. As failures propagate upward out of a deep check, they are contextualized at each level, so that by the time they reach the top, they have all the context required.

After all the contextualized failures are collected in the array @failures, a single "multi-failure" is thrown. That's going to have the type "none" and then subfailures of each one. The normal string form of such an error will then show you every way in which you failed, just like a cruel teacher.
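
In Python, the same accumulate-and-rethrow shape might come out something like this (a sketch only: SchemaMismatch is the exception class from this pull request, and the constructor and helper names here are placeholders, not a final API):

class SchemaMismatch(Exception):
    """Stand-in for the PR's exception class; here it just carries sub-failures."""
    def __init__(self, message, failures=None):
        Exception.__init__(self, message)
        self.failures = failures or []

def assert_any_valid(alternatives, value):
    """Sketch of //any: succeed on the first matching alternative, otherwise
    raise a single failure that carries every alternative's failure."""
    if not alternatives:
        return True                        # //any with no 'of' accepts anything
    failures = []
    for i, check in enumerate(alternatives):
        try:
            check.assert_valid(value)
            return True                    # any single match is a total success
        except SchemaMismatch as failure:
            # (the Perl version also stamps each failure with its 'of'/index path here)
            failures.append((i, failure))  # remember which alternative failed and why
    raise SchemaMismatch("matched none of the available alternatives",
                         failures=failures)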

Let me know if this did or didn't help. The tests here are not as comprehensive as they could be, which makes them unclear. I should update the spec, of course. I still use Rx heavily in Perl, and lightly in some other languages, but because the Perl implementation is done, I have been lax on touching anything else. I keep finding out other people use it, though, so I really need to make the time!

@CesiumLifeJacket
Contributor Author

There are some aspects of this that I still don't understand. How do the failure structures look for nested errors? Say, for instance, that rec-key-opt-rest.json's required key was another //rec schema that the data failed to match in a couple of ways?

Also how do you feel about implementing these errors as Python objects instead of straight dictionaries, and then possibly including a method that converts them to dictionaries of the form that you expect for your tests?

@rjbs
Owner

rjbs commented Sep 4, 2015

They should absolutely be thrown as objects. In the Perl code, for example, you get a Data::Rx::Failure object. The spec is describing the properties of the error expected, and absolutely not defining the exact representation. Data::Rx::Failure and Data::Rx::FailureSet do not have a method like the one you describe. Instead, there is a helper library for the tests which does the comparison of a Failure object to a spec entry.

You are of course free to do whichever you prefer. I think I'd advise to follow the Perl code's model. That way, if the test definitions change, only the test code needs updating. Still, I wouldn't expect it to be a big deal.
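
In Python, such a test helper might be no more than this (a sketch; data_path(), check_path(), and error_types() are assumed accessor names on the failure object, not an existing API):

def failure_matches_spec(failure, expected):
    """Compare one failure object against one entry from a spec test's
    "errors" list, field by field."""
    return (list(failure.data_path())   == expected["data"] and
            list(failure.check_path())  == expected["check"] and
            list(failure.error_types()) == expected["error"])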

As for the failure structures looking for nested errors: well, I'm not sure I fully understand your question, and I hope I don't sound too thick or pedantic when I say that of course the failure structures don't need to look for nested errors -- they already have the record of them. I think you're asking one of two things, and I'll try to answer both. When it turns out I'm wrong, maybe you'll know how to set me right.

If you mean "How do you manage to return a structured error when each checker checks one thing?"

When you call assert_valid on a complex schema and it can't reject right away (as you could if, say, you're trying to assert that {} is a valid //bool), then it will call all its subchecks. Say you required an array of ints, but the 3rd entry is a bool. The //bool checker would throw an exception (a Data::Rx::Failure in Perl, an Rx.ValidationError in Python (maybe? whatever)). That exception would propagate up to the //arr checking code, which would catch it and add more data: "this was hit when checking the value in slot 3 against the contents subcheck." In the Perl code that I included above, that's the contextualize method.

So the call stack, going deeper, eventually hits an exception. As the exception handlers run on the way back up, each one contextualizes it more and eventually it goes uncaught back to the caller. In the example above, it would add "3" to the path for the data (for index 3 into the data) and "contents" to the path for the check, because it's the "contents" check of //arr that rejected a bool where an int was expected. If the //arr had been inside another structure, that contextualized exception would be rethrown, recaught, and given another piece of path data.

This is not the only way to do this. It might not always be the best way. It's just how Perl's implementation works.

In the example you gave, the top-level //rec checker gets the value in the relevant key and calls the checker for the value. That one throws an exception. The top-level checker contextualizes it, and that's rethrown. The user catches this and can see the whole problem.
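
A rough Python rendering of that catch-annotate-rethrow step, for the //arr case above (SchemaMismatch is the exception class from this pull request; the context attribute and frame layout are placeholders):

def validate_arr_contents(contents_schema, values):
    """Sketch: check every element, adding one frame of path context to any
    failure before letting it continue up the stack."""
    for i, item in enumerate(values):
        try:
            contents_schema.validate(item)
        except SchemaMismatch as failure:
            # Record where we were: index i in the data, the "contents"
            # subcheck in the schema. Outer containers will append more frames.
            failure.context.append({"data_path": i, "check_path": "contents"})
            raise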

If you mean "How does the user look at the exception they receive and figure out what happened?"

Well, there are a few ways. Me, I just print the exception to the screen and it tells me — but of course the implementation of __str__ has to do the hard work, so let's talk about how.

When you get a failure on a deep structure, the thing you get back has all the context to find the data that was rejected and the check that rejected it. That's what got added to it by the contextualization, above. The way that the Perl errors implement the path data is with a pair of arrayrefs (Lists). When we note that we got this exception at (data path 3, check path "contents") we do the equivalent of:

error.context.append({ "data_path": 3, "check_path": "contents" })

That happens a bunch of times as you go up the handler and eventually you end up with a context that is a list of frames, just like an exception's stack trace, except it's tracing the parts of the schema, not the program.

I seem to have explained how the structure is built, more than the actual question, but that's okay, because this information makes the answer clear:

To explain the context of the error, there is a method called data_path that gathers up all the data path entries from the context and returns them as a list in drill-down order. There's another method called check_path.
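
Sketched as plain Python functions, assuming context is the list of frames appended innermost-first as the exception climbs the stack:

def data_path(context):
    """Gather the data-path pieces from every frame, outermost first, so the
    result reads as a drill-down path into the rejected input."""
    return [f["data_path"] for f in reversed(context) if "data_path" in f]

def check_path(context):
    """Same idea for the schema side: which subchecks led to the rejection."""
    return [f["check_path"] for f in reversed(context) if "check_path" in f]

# For the slot-3 example above: data_path(err.context) would give [3] and
# check_path(err.context) would give ["contents"], plus whatever any
# enclosing containers added on the way up.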

Stringifying a Data::Rx::Failure is:

  my $str = sprintf "Failed %s: %s (error: %s at %s)",
    $struct->[0]{type},
    $struct->[0]{message},
    $self->error_string,
    $self->data_string;

The type and error there are from the very first bit of context: what was the real error? This will tell us, for example, that we had a "range" error with the message "5 is too big, 2 was the max" when talking about an int found deep in a dict.

Finally, one more elaboration: note that I used error_string and data_string. These format the paths into something like $check->[0]->{foo}->{contents} for easy reading (if you read Perl). It can do that because each bit of context isn't just data_path = 3 but something like data_path = { "type": "index", "value": 3 }, with a type for dict values, subroutine args, and so on.
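
A Python version of that formatting step could be as small as this (the (value, kind) pairs follow the typed path entries just described; the output style simply mimics the Perl one):

def format_path(entries):
    """Render typed path entries, e.g. [(0, 'index'), ('foo', 'key')],
    as a drill-down string like '->[0]->{foo}'."""
    rendered = []
    for value, kind in entries:
        if kind == 'index':
            rendered.append('->[%s]' % value)
        elif kind == 'key':
            rendered.append('->{%s}' % value)
        else:
            rendered.append('->(%s)' % value)   # other kinds, e.g. subroutine args
    return ''.join(rendered)

# format_path([(0, 'index'), ('foo', 'key'), ('contents', 'key')])
#   == '->[0]->{foo}->{contents}'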

Double finally, all this is just how the Perl code does it. I think it's fine, but you may have another idea that will work better for you. Let me know!

@CesiumLifeJacket
Contributor Author

I am sorry, but neither of those answers quite address the situation I am stuck on. What I meant to ask was how these structured errors work for a multi-error failure where errors exist across multiple levels in the data, in what I picture as a recursive error tree.

As I understand it right now (quite possibly incorrectly), your final error structure would hold all the discovered errors in one denormalized list; if two elements in an array nested deeply within some complicated structure both failed a test, these failures would be represented as two objects in your top-level list of errors, with their data_paths identical except for the last element (the different array indices).

To my mind it would be more natural to represent the failure structure as a tree, where container errors (map, rec, seq, arr) can act as branch nodes pointing down to the errors in their children. I have to go right now, but I believe there are several advantages to this structure over the mental model of that perl structure I currently have. I'll be back and flesh this out in a couple hours.

@rjbs
Owner

rjbs commented Sep 5, 2015

Thanks for clarifying! Let's have a look at what happens. Here's a test program in Perl:

use strict;
use warnings;
use Data::Rx;
use YAML::XS qw(Load);

my $schema_struct = Load("
---
type: //rec
required:
  FOO:
    type: //rec
    required:
      BAR:
        type: //all
        of  :
        - { type: //int, range: { min: 1 } }
        - { type: //int, range: { min: 2 } }
");

my $rx     = Data::Rx->new;
my $schema = $rx->make_schema( $schema_struct );

$schema->assert_valid({ FOO => { BAR => 0 } });

It prints this:

Failed //int: value is outside allowed range (error: range at $data->{FOO}->{BAR})
Failed //int: value is outside allowed range (error: range at $data->{FOO}->{BAR})

So far so good, but you want to know whether we've created a tree that diverges only as necessary or whether we've got a list of paths, some of which have common prefixes. So, let's dump the error object's guts:

bless({
  'failures' => [
      # Here's the first error we get
      bless({
          'struct' => [ {
              'type'    => '//int',
              'message' => 'value is outside allowed range',
              'value'   => 0,
              'error'   => [ 'range' ]
            },
            {
              'check_path' => [ [ 'of', 'key' ], [ 0, 'index' ] ],
              'type' => '//all'
            },
            {
              'data_path' => [ [ 'BAR', 'key' ] ],
              'check_path' => [ [ 'required', 'key' ], [ 'BAR', 'key' ] ],
              'type' => '//rec'
            },
            {
              'type'       => '//rec',
              'check_path' => [ [ 'required', 'key' ], [ 'FOO', 'key' ] ],
              'data_path' => [ [ 'FOO', 'key' ] ] }
          ],
        },
        'Data::Rx::Failure'
      ),
      # Here's the second one
      bless({
          'struct' => [ {
              'error'   => [ 'range' ],
              'value'   => 0,
              'message' => 'value is outside allowed range',
              'type'    => '//int'
            },
            {
              'type'       => '//all',
              'check_path' => [ [ 'of', 'key' ], [ 1, 'index' ] ]
            },
            $VAR1->{'failures'}[0]{'struct'}[2],
            $VAR1->{'failures'}[0]{'struct'}[3]
          ],
        },
        'Data::Rx::Failure'
      ) ]
  },
  'Data::Rx::FailureSet'
);

Notable facts:

  • the FailureSet error we get has two things in its failures entry
  • each one has all of the path information
  • that means that we don't have two error "leaves" beneath the path leading to the //all check
  • …but the path elements are re-used, so the size cost is minimal

Certainly what you describe could be done, and it wouldn't be too hard. It may be better for some applications, if you're going to walk the error object, especially if you walk it in sync with the data structure. The existing implementation may have no particular benefit over it, in fact. I think it's just how we ended up doing it! If I think of some benefit beyond "was easy to implement" I'll add it here. ;)

wanted reference to variable `name`, not the string '`name`'
@CesiumLifeJacket
Contributor Author

Back. I think it would be easier for me to implement the tree structure, and personally I think a tree better conceptually represents the nature of this data. Additionally, a tree makes it easy to implement error messages which are less redundant, by grouping all the failures associated with a particular container under a "section" for that container. For example, my current implementation of the error messages does this with your example:

import Rx
import yaml

schema_struct = yaml.load('''
---
type: //rec
required:
  FOO:
    type: //rec
    required:
      BAR:
        type: //all
        of  :
        - { type: //int, range: { min: 1 } }
        - { type: //int, range: { min: 2 } }
''')

rx = Rx.Factory()
schema = rx.make_schema(schema_struct)

schema.validate({'FOO': {'BAR': 0}})

raises an error with the message:

value: FOO: BAR failed to meet all schema requirements:
  BAR must be in range [1, inf)
  BAR must be in range [2, inf)

which, while some details could be improved upon, I think is a generally nicer message.

With the error tree, the check_path and data_path are both implicit in the structure. Each error type can be its own object with its own __str__ method, and container errors can call the __str__ of their child errors to compose a nicely formatted, recursive error message.
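
Roughly, the composition I have in mind looks like this (simplified; the class names here are placeholders and the real classes carry more detail):

class IntRangeMismatch(Exception):
    """Leaf error: knows how to describe itself."""
    def __init__(self, name, minimum):
        Exception.__init__(self)
        self.name, self.minimum = name, minimum

    def __str__(self):
        return '%s must be in range [%s, inf)' % (self.name, self.minimum)

class ContainerMismatch(Exception):
    """Branch error: describes itself by indenting its children's messages."""
    def __init__(self, label, children):
        Exception.__init__(self)
        self.label, self.children = label, children

    def __str__(self):
        child_lines = '\n'.join('  ' + line
                                for child in self.children
                                for line in str(child).splitlines())
        return '%s failed to meet all schema requirements:\n%s' % (self.label, child_lines)

# str(ContainerMismatch('value: FOO: BAR',
#                       [IntRangeMismatch('BAR', 1), IntRangeMismatch('BAR', 2)]))
# reproduces the message shown above.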

@rjbs
Owner

rjbs commented Sep 11, 2015

I realize now that maybe you posted the above with the thought that I would reply. Just in case: your plan sounds good. 😉

@CesiumLifeJacket
Contributor Author

Update: I've been working on this and am close to done, but school's started up again and it's slowing me down a lot. I'll have a commit with the structured errors as soon as I can, hopefully within a week.

@oliverpool

Hi,

It seems that this PR has already been merged.

Except for the last commit of CesiumLifeJacket, everything is already in master!
Could you maybe merge or close this issue? (To prevent people like me from thinking that the master branch is not Python 3 ready yet.)

oliverpool referenced this pull request in patacrep/patacrep Dec 17, 2015
@rjbs
Owner

rjbs commented Dec 18, 2015

Just applied that last commit as e0368c9.

Because this ticket acquired another purpose, I'm leaving it open, but I'll change the title to avoid the appearance that we're not py3 ready. Thanks for the nudge!

@rjbs changed the title from "Python 3 compatibility and error messages" to "overhaul python error reporting for structured failure" on Dec 18, 2015
@oliverpool

👍

@CesiumLifeJacket I looked at your proposal for the error management and I like it (especially having the full path to the key that raised the error(s)).

@rjbs I think Rx is a great tool; it could be more widely used if you split the repository: one main repository with the documentation and several repositories with the different implementations (with an official pip package for Python, for instance). It would also help with following the forks of this project (currently the forks are not marked as such, because people just create a new repository with the implementation file they are interested in).

@CesiumLifeJacket
Contributor Author

Thanks for commenting Oliverpool, you made me actually follow up on this. I've updated my version of Rx.py with a rough draft of the structured error classes. The schemas that can act like branches now raise SchemaTreeMismatches, which can store errors in two attributes: errors and child_errors. child_errors stores the SchemaMismatch errors of some data structure's children; it's used by seq, arr, map, and the like. A key in the child_errors dictionary is the key in the data that indexes that problem child. The errors attribute is a list of errors which can't be associated with children in the data; it's used by //any and //all, and for issues where, for instance, an //arr is outside the expected length range. Here's an example of the error messages this produces:

>>> import Rx
>>> f = Rx.Factory()
>>> import yaml
>>> s = f.make_schema(yaml.load('''
... type: //seq
... contents:
... - //int
... - //nil
... - type: //rec
...   required:
...     foo: //int
...     bar: //int
...   optional:
...     baz:
...       type    : //arr
...       contents: //int
...       length:
...         min: 1
...         max: 3
... '''))
>>> s.validate([1, None, {'foo': 1, 'baz': [3, 4, 5, 6.2, 7], 'bar': 2}])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/jeremy/Documents/repos/rx/python/Rx.py", line 608, in validate
    raise mismatch;
Rx.SchemaTreeMismatch: [2] ['baz'] does not match schema requirements:
  length must be in range [1, 3]
  [3] must be an integer

SchemaMismatch exceptions also have an attribute schema_type which stores the type of schema which raised that exception, e.g. //str, //arr, etc.
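
For a concrete picture of that layout, here is roughly how the failure from the example above nests (these are simplified stand-ins for the real classes, just to show the errors/child_errors/schema_type attributes):

class SchemaMismatch(Exception):
    def __init__(self, message, schema_type=None):
        Exception.__init__(self, message)
        self.schema_type = schema_type

class SchemaTreeMismatch(SchemaMismatch):
    def __init__(self, message, schema_type=None, errors=None, child_errors=None):
        SchemaMismatch.__init__(self, message, schema_type)
        self.errors = errors or []              # failures of the container itself
        self.child_errors = child_errors or {}  # failures of children, keyed by index/key

arr_fail = SchemaTreeMismatch('does not match schema requirements', '//arr',
                              errors=[SchemaMismatch('length must be in range [1, 3]')],
                              child_errors={3: SchemaMismatch('must be an integer')})
rec_fail = SchemaTreeMismatch('does not match schema requirements', '//rec',
                              child_errors={'baz': arr_fail})
seq_fail = SchemaTreeMismatch('does not match schema requirements', '//seq',
                              child_errors={2: rec_fail})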

There are some things about this system I'm not in love with and am already thinking about changing, but if you have any feedback about the direction I'm taking this before I get in too deep, it would be appreciated.

@oliverpool

I really like it! 👍

To emphasize the tree structure, maybe you could print >:
Rx.SchemaTreeMismatch: [item 2] > 'baz' does not match schema requirements: (add the word "item" for lists and drop the brackets for dict keys)

The errors could also report the values that were found:

  length: 5
    must be in range [1, 3]
  [item 3]: 6.2
    must be an integer

but I think the formulation can be improved

@CesiumLifeJacket
Contributor Author

Thanks for the feedback. What do you mean by formulation? Can you be more specific?

One thing I was thinking about changing was to make all the Mismatch exceptions more like SchemaTreeMismatch: they would store the details of the error that occurred and turn that data into an error message in an overloaded __str__() method, instead of the current mixture where error messages are generated partly in the exception classes and partly in the validate() methods.

@oliverpool

but I think the formulation can be improved

is aimed at my proposal: the words and sentences that I propose could be improved.

Your "data storage proposition" sounds great to me: it allows on the developer side to have a direct insight of the error if needed.

@CesiumLifeJacket
Contributor Author

Latest commit is a rework of the way Mismatch exceptions are designed, with your proposed changes implemented. Still probably needs a little polishing but I'm much happier with the overall structure now.

That same code now produces an error message like this:

Rx.TreeMismatch: [item 2] > 'baz' does not match schema:
  length must be in range [1, 3] (was 5)
  [item 3] must be of type int (was float)

@CesiumLifeJacket
Contributor Author

While I'm at it, another thing I've been thinking about is the Factory class. What utility does this class provide that couldn't be gotten from simply making the Rx module behave like an instance of Factory? If, for example, instead of writing

import Rx

rx = Rx.Factory()
schema = rx.make_schema(...

you would just write

import Rx
schema = Rx.make_schema(...

I feel like the vast majority of the time, the latter would serve just fine, and it saves a line of code. If there is some other functionality this eliminates, would it be possible to add that functionality back in as the exception, instead of the rule? I think this change would make using Rx for the first time a little more approachable.

@oliverpool

I'm totally satisfied with the error message that is now displayed (the "was ..." part is exactly what I needed).

making the Rx module behave like an instance of Factory

I agree with this proposal!

@CesiumLifeJacket
Contributor Author

The no-factory branch of my fork of the repository makes that change, but I'd like to hear Ricardo's opinion on this, because it feels like I'm removing some functionality that I don't quite appreciate.

Also, minor stylistic decision: The Type classes are ordered alphabetically, so should the Mismatch exceptions be ordered this way as well? Right now they're kind of thematically arranged, with similar exceptions grouped together and generally increasing in complexity as you go down. I like the idea of alphabetical ordering, but it would separate some very similar exception classes, such as MissingFieldMismatch and UnknownFieldMismatch, which I don't like so much. Thoughts?

@oliverpool

minor stylistic decision: The Type classes are ordered alphabetically...

Personally, I either read the complete code linearly (so grouping similar exceptions makes sense) or when I search for something I use my editor tools (so I don't care about the ordering). Apart from that, I can't tell...

@rjbs
Owner

rjbs commented Jan 3, 2016

Thanks for your patience during the holidays while I ignored most of my repositories!

I am opposed to removing the factory. The factory provides a useful layer of indirection so you can do things like provide your own implementation of core types on a per-Factory basis. Your patch makes this impossible because it unconditionally registers the core types. To allow different sets of checkers to exist in one process, it needs to be possible to have two factories.

Put another way: removing the factory makes Rx configuration global and less flexible.

@rjbs
Owner

rjbs commented Jan 3, 2016

It's nice to see this getting some love! I agree with your suggestion: better to have more data-rich errors than to package it all up into human-readable strings.

Also, note that the Rx test suite wants keywords on errors. These should be easy to add with that change, though!

@oliverpool

removing the factory makes Rx configuration global and less flexible

I think I understand your point. Instead of removing the Factory, maybe a class method could be added to have a "one liner":

import Rx
schema = Rx.Factory.standard_schema(yaml_like_object)

I think the default core types are enough for the majority of people: it can be good to simplify their first contact with Rx!

@rjbs
Owner

rjbs commented Jan 4, 2016

I'm not sure I 100% follow your example, but I think what you're suggesting is the same thing I was going to suggest, which came to me in the shower. It seems like a good obvious idea:

  import Rx
  schema = Rx.make_schema(...)

...where that method would be something like:

  std_factory = None
  def make_schema(schema):
    global std_factory
    if std_factory is None: std_factory = Factory()
    return std_factory.make_schema(schema)

Right?

@CesiumLifeJacket
Contributor Author

Right!

Latest commit is a file with the old Factory back, but also that Rx.make_schema() method, and SchemaMismatch now has an error attribute, which is a string like 'type' or 'range', etc. I'm not sure the error names I put in match those expected by the tests 100%.

The next step seems to me to update rx-test.py to validate the structure of the errors, instead of just whether or not an error is being thrown. When those tests get implemented I should know if I named all the errors correctly.

@rjbs
Owner

rjbs commented Jan 25, 2020

I am back after years in the wilderness, and wonder whether this PR should be closed, left open, or other.

If nothing else, maybe I am finally ready to split the repo, as it's often asked for…
