
@nparlante
Contributor

@cpennington Could you check this one out?
@rocha Note the analytics issue below
@jinpa How about you do the testing? It's your story, I believe!

Just FYI: @jaericson @shnayder

Adds Studio and LMS support for an option to shuffle
the displayed order of multiple choice questions.
It works by shuffling the XML tree nodes of the problem
during the get_html process, using the problem's seed.
The added XML and Markdown syntax is documented in the Changelog.

One concern was: will this mess up analytics? That is, when the choices are displayed shuffled, is enough logged that analytics still works? My experiments suggest it will be fine:

I have a 4-option problem with shuffling enabled. In this case, the options are presented in the order b a d c (i.e. 1 0 3 2). I selected option c, the last one displayed (which happens to be correct), and then looking in courseware_studentmodulehistory I have the following row:

2013-10-22 22:23:40.658495|1.0|1.0|{"correct_map": {"i4x-Stanford-CS99-problem-408c0fcffb8c41ec87675d8c3f7a3b5b_2_1": {"hint": "", "hintmode": null, "correctness": "correct", "npoints": null, "msg": "", "queuestate": null}}, "input_state": {"i4x-Stanford-CS99-problem-408c0fcffb8c41ec87675d8c3f7a3b5b_2_1": {}}, "attempts": 2, "seed": 282, "done": true, "student_answers": {"i4x-Stanford-CS99-problem-408c0fcffb8c41ec87675d8c3f7a3b5b_2_1": "choice_2"}}||4|52

In student_answers I see choice_2, which is "c" in the 0-based numbering, so that's right. It looks to me that it has successfully recorded which choice I made, not getting confused by the fact that the options were displayed in a different order. The rest of the stuff in the row looks reasonable, but mostly it's Greek to me.

Because we're using the standard Python shuffle, knowing the seed you can recreate the shuffle order. Maybe you never need to do this, since the problem will just do it for you. Still, noting that the logged seed above is 282:

import random

# Recreate the displayed order from the logged seed
r = random.Random(282)
a = [0, 1, 2, 3]
r.shuffle(a)
print(a)   # [1, 0, 3, 2] -- i.e. the displayed order b a d c

Cute!

Contributor

This pushes fixed islands to the end. I think that's reasonable behavior, but you don't actually test it in the set of tests.

@cpennington
Contributor

Other than the one missing test case, looks good to me. @nedbat: am I missing anything?

@nedbat
Contributor

nedbat commented Oct 25, 2013

I don't see a test that confirms that different seeds produce different results.

Contributor

It would be better to write out this if/else over four lines; the flow is hard to see here. Also, why not if (shuffle) rather than the inverted test?

I notice you dealt with the fixed flag and the shuffle flag differently; the code might be easier to follow if the same idiom were used.

@nparlante
Contributor Author

Thanks for the quick responses, Cale and Ned. I've made the following fixes:

-I promoted the rare "island" goes-to-tail case to be the official behavior and tested it, although obviously it's kind of an oddball case. Leaving it undefined (K&R style, sort of) the way I had it was probably not ideal. I expanded the changelog doc to document all this.

-I cleaned up the .coffee js if/else, and I also noticed that file writes its ifs with no space before the (, so I did that too.

-Added a test for a different seed.

I just kept the one commit and used --force to put it back up. I'll ping @rocha to make sure he's ok with the analytics end. I think Jane is traveling today, so I expect she won't do the testing until next week.

@rocha
Contributor

rocha commented Oct 28, 2013

@nparlante It would be ideal if we could include some indication of whether the question was shuffled or not.

IMHO we still have to figure out a better way to deal with randomization in problems, since relying on every Python implementation producing the same random results for the same seed seems too fragile to me. The issue is larger than this pull request, however.

@nparlante
Contributor Author

@rocha -- hmm, I can confirm that the rng changes across releases (code below), at least sometimes -- d'oh!
How about this: I record the shuffle order, e.g. "[0, 2, 1, 3]", so the log is easy to interpret, and more importantly, if we change the rng scheme in the future for whatever reason, everything still works.

Could you point me to the line that makes that log entry for each student submit?

# Code to see if the random number generator changes across releases ... it does!
import random
r = random.Random(0)
a = [0, 1, 2, 3]
r.shuffle(a)
print(a)

# Python 3.2: [2, 0, 1, 3]
# Python 2.7: [1, 0, 2, 3]

@cpennington
Contributor

I think that particular wrinkle is something we don't take into account in much/most of our analytics logs at the moment. In particular, I think we depend on using the seed to regenerate problems elsewhere in capa randomization too.

@nparlante
Contributor Author

Hmm, digging around in the Python docs, the following section appears in 3.2.x but not in the 2.x series. It looks to me like this gives the reproducibility feature we want ... too bad it's a future version we're not using! It does mean that we can take the pain once at 2.x-to-3.x conversion time; then at least it will be solved and we can forget about it.

http://docs.python.org/3.2/library/random.html#notes-on-reproducibility

8.6.1. Notes on Reproducibility
Sometimes it is useful to be able to reproduce the sequences given by a pseudo random number generator. By re-using a seed value, the same sequence should be reproducible from run to run as long as multiple threads are not running.
Most of the random module’s algorithms and seeding functions are subject to change across Python versions, but two aspects are guaranteed not to change:

If a new seeding method is added, then a backward compatible seeder will be offered.
The generator’s random() method will continue to produce the same sequence when the compatible seeder is given the same seed.

@nparlante
Contributor Author

So I've come to the view that the correct course here is to leave the logging as it is, for these reasons:

-The most important bit -- their answer and its correctness -- is logged

-I believe the current paradigm of the logging is that if you want to know something about the question, you load the question object and ask it directly. That all still works. Looking at the example log at the top of this thread ... we don't even log the number of options. The analytics code must load the question to orient itself about all that, which seems fine, as we don't want to duplicate that data in the logs. If someone wants to do analytics about shuffle order affecting correctness, they will have to load the question and interrogate it, just like now.

-There will be a problem doing analytics in a Python 3.x world using data gathered in a 2.x world, but that seems pretty minor as 3.x problems go.

@rocha
Contributor

rocha commented Oct 29, 2013

@nparlante thanks for the research. I think your arguments are valid.

One more question: why is the shuffling done at the LoncapaProblem level, and not in MultipleChoiceResponse? It seems to me that the code as it is isn't that reusable for other problem types, since their trees do not necessarily have answer options.

Moving it to MultipleChoiceResponse could also help when adding something like an is_shuffled flag to the module state, which is what is captured by the event tracking logs.

@nparlante
Contributor Author

Hmm, I'm not sure what you mean about |MultipleChoiceResponse|. Heh, really my architectural background for this feature is based on dropping into pdb and observing what code is rendering whatever I just built in Studio.

Here's what the xml looks like now:

<problem>
  <multiplechoiceresponse>
    <choicegroup type="MultipleChoice" shuffle="true">
      <choice correct="false">Apple</choice>
      <choice correct="false">Banana</choice>
      <choice correct="false">Chocolate</choice>
      <choice correct="true">Donut</choice>
    </choicegroup>
  </multiplechoiceresponse>
</problem>

Do you mean to put the shuffle= up in multiplechoiceresponse, or do you mean the implementation code might fit better in capa_problem.py or somewhere else?


@rocha
Contributor

rocha commented Oct 29, 2013

@nparlante I meant the implementation. Sorry I was not clear.

MultipleChoiceResponse is a class inside capa/responsetypes.py. However, now that I see your XML example, it looks to me that the implementation is better suited inside the ChoiceGroup class in capa/inputtypes.py. The class also does some parsing for you if you use 'Attributes'.

We still won't have the emitted events indicate that the question was shuffled, but as you said earlier, that is a problem that affects other problem types as well.

@cpennington
Contributor

Good catch, @rocha. I agree, this randomization should be part of ChoiceGroup, rather than being run inside capa_problem.py.

@nparlante
Contributor Author

Sure, that file looks reasonable. Heh, it includes the following TODO, which is kind of suggestive:
# TODO: handle direction and randomize

I could override __init__ and shuffle the tree in there somewhere, or I could override render_html, seemingly the closest analog to the previous get_html implementation, and shuffle the tree before the rendering. My inclination is to do it in __init__ so the shuffling happens exactly once, and you can render as many times as you like.

@rocha .. what do you think about where the shuffling should be done?


@cpennington
Contributor

In general, we're only likely to be rendering once per request anyway, so I would lean towards putting it into get_html. A push in the other direction might be that we don't want to leak the pre-randomized order of the choices through the ids, in which case the randomization should be of the choices up front, before we assign ids to them at all.
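To make that second option concrete, here is a minimal sketch of shuffling the raw choice elements first and only then handing out choice_N ids, so the ids say nothing about the authored order. The helper name is hypothetical (not the actual ChoiceGroup code), and it assumes lxml-style elements:

import random

def shuffle_choices(choice_elements, seed, shuffle=True):
    # Sketch: shuffle the authored <choice> elements (when enabled), then
    # number them in displayed order, so the resulting choice_N ids say
    # nothing about how the author listed the choices.
    elements = list(choice_elements)
    if shuffle:
        random.Random(seed).shuffle(elements)
    for i, elem in enumerate(elements):
        elem.set('name', 'choice_%d' % i)   # id reflects displayed position only
    return elements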

@rocha
Contributor

rocha commented Oct 30, 2013

@nparlante what @cpennington said :)

Also, there is no need to add an additional shuffle attribute. Instead we should handle randomize correctly.

@nparlante
Contributor Author

Ok, get_html it is!

@rocha for the randomization setting, my understanding is that the randomize setting controls when the seed is set and reset, and essentially the seed feature is independent of the problem type.

The shuffling is one sort of "randomization" which is particular to multiple choice problems, using the seed. Heh, I happen to know we're going to have other types of "randomization", since Jeff Ericson @jaericson, sitting across from me, is working on another type of randomization which I think will turn out to be orthogonal to shuffle.

So anyway I think that's why it's right to have an independent "shuffle" attribute.

@shnayder

+1 on having a separate shuffle attribute--it is a separate use case. (randomize will control whether it's re-shuffled every time or not)

@rocha
Contributor

rocha commented Oct 30, 2013

@nparlante @shnayder makes sense!

@nparlante
Contributor Author

Great -- I was sort of 80% confident about the randomize thing, so it's good to hear that that makes sense. I think we've got a good consensus on what I need to do, so I'll ping you all when I get it moved over on this same PR.

It seems like it's not too much work, but ahem ... well you never know!

@nparlante
Contributor Author

So I put the following code in to verify that render_html does not get called multiple times...
if hasattr(self, 'render_called'):
    print 'This never happens!'
    raise AssertionError("render_html is not supposed to be called multiple times")
self.render_called = True

And as you can guess, it turns out that assumption is false (I just ran test_lms, which was the laziest thing I could think of). It's not like Cale was giving me some sort of gold-plated guarantee on that prediction, but in the light of day, assuming that something never gets called twice was not the most reliable path. It's easy for stuff to get called, and there are a lot of code paths, so it's just going to creep in unless there's some sort of assert formally keeping it from happening.

So anyway, it seems like the shuffle code should go in __init__ or something in that neighborhood, so that's what I'm working on now.

@nparlante
Contributor Author

Ok, it's basically the same code, but now housed in responsetypes.py

One difference is that here it was easy to do the shuffle early, so that the choice_0 ... numbering uses the shuffled version. As Calen points out above, this prevents us from leaking any info about what order the author chose, which seems desirable -- like the author can just write the choices in whatever order they like, and we have them covered.

This means that analytics will need to create the problem with its seed to decode what choice_0 means, since it no longer even means the first choice in the authored ordering.

@rocha
Contributor

rocha commented Nov 4, 2013

@nparlante one of the most commonly used analyses is the answer distribution. How will the shuffling affect that analysis? For example, instructors normally want to know how many people answered option a, how many option b, etc.

@nparlante
Contributor Author

The current style in the capa space that I've seen is that the problem is the store of info about the problem (this predates shuffle). The log just records what the student did. A good example of this style is that the log does not even record the number of choices. You have to go to the problem if you want to know how many choices there were. So anyway, the shuffle works the same way: you've got to load the problem to decode what choice_0 is. I assume that's what the current analytics code does .. it's the only way to get the text of the choices or whatever.


@rocha
Contributor

rocha commented Nov 4, 2013

@nparlante I agree that for most capa problems, it is necessary to check the original problem to know what an answer means. However, in the case of multiple choice questions that has not been the case in the past. It is very common for instructors/researchers to treat these problems as you would a "survey". Basically, it is very easy (and very commonly done right now) to count how many users answered a, or b, or c, without checking the original problem.

If I understand correctly, with this patch it would be absolutely necessary to use the original problem and the seed to be able to recreate the distribution of answers, correct?

@nparlante
Contributor Author


That's correct: the raw choice_0 in the log stream will mean different things depending on the seed, so you can't just count them in their raw form. It would be easy enough to put a little permutation hint, e.g. [1, 0, 3, 2], in the log stream, although that would muddy the clarity of the design -- currently the log is just what the student did, and knowledge of the problem is in the problem. The total amount of info about the problem is rather large (i.e. you wouldn't want to log it all again and again), so ultimately, the analytics side must be comfortable getting info from the real problem.

My guess is that we should leave the log format as it is. The analytics side could have a get_native_choice() function that takes the logged choice and the problem and gives you the "native" choice_0/choice_1 numbering. Just build the function once and analytics can use it forever, so this doesn't seem like a big cost. It could be a function on the problem object itself, so you just construct the problem object and let it translate for you. Or it could return the above [1, 0, 3, 2] hint, so the caller can see how the problem was laid out. That sort of code is trivial to implement inside the problem.
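For illustration only, here is a minimal sketch of what such a helper could look like, assuming the shuffle is just a random.Random(seed).shuffle() of the authored indices with choices renumbered in displayed order. The names are hypothetical, not the actual capa code:

import random

def get_native_choice(logged_choice, seed, num_choices):
    # Map a logged, shuffled choice id (e.g. 'choice_0') back to the
    # authored ("native") choice id, given the problem's seed.
    order = list(range(num_choices))
    random.Random(seed).shuffle(order)          # e.g. seed 282 -> [1, 0, 3, 2]
    shown_position = int(logged_choice.split('_')[1])
    return 'choice_%d' % order[shown_position]  # authored index at that position

# With seed 282 and 4 choices: get_native_choice('choice_0', 282, 4) -> 'choice_1',
# i.e. the first displayed option was the authored second choice.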


@jinpa
Contributor

jinpa commented Nov 4, 2013

My sense is that some people working with the data may be working with the logfiles, and getting what they can from those, without using the models. Can you go over what the downside would be to also logging the "native" choice, so that someone could do an aggregation of how many students chose which option without needing to use a get_native_choice() function? Is the issue that the extra field might break someone's log parser?


@nparlante
Contributor Author

I would say that log-only analysis is not formally supported right now, but it happens to work. The crop of problem-randomizing features coming down the pike will certainly break this sort of analysis. I think the problem objects are just going to get steadily bigger and more complicated, which will enable better teaching, AND necessarily will make analytics more complicated.
If we want to support log-only analytics, then I'll say there are two paths:

(1) The easy way is that we record minimally what happened, but do not record the full state of the problem. E.g. we can record [1, 0, 2, 3] BUT we don't record that the 3 had the rarely used @ option set, or that it was an "all of the above" choice, or began with uppercase letters, or any other of the countless little details that make up a problem. Essentially, the person doing the analysis is responsible for accessing all of the problems out of band and knowing what state they are in. It sounds like this is actually the current practice.

A variation of (1) I just thought of is that we could always log using the native numbering .. that's pretty darn simple and resembles the current logs exactly. The cost is that it really makes debugging a headache, since the user submits choice_3 but it gets logged as choice_0. Recording the [1, 0, 2, 3] avoids that problem.

(2) Record all of the state/settings of the problem in the log, which could be rather a lot, but does mean the analytics side does not need to access the problem object. I think this is the wrong approach. The problem object already stores everything there is to know about a problem and will be maintained to do so in the future as more features are added. So if you want to know all of the state of a problem... get yourself a problem object.

I think the costs of the (1) approaches above are pretty low, so I'm fine doing that if we decide we want to support the log-only case.
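To make option (1) concrete, here is a rough sketch of what the extra hint could look like. The "permutation" field name and the helper are hypothetical and purely illustrative:

import json
import random

def shuffle_order(seed, num_choices):
    # Derive the displayed order from the seed, the same way the shuffle does.
    order = list(range(num_choices))
    random.Random(seed).shuffle(order)
    return order

# The event stays "what the student did", plus one small permutation hint;
# everything else about the problem still lives in the problem object.
event = {
    "seed": 282,
    "student_answers": {"<input_id>": "choice_2"},
    "permutation": shuffle_order(282, 4),   # e.g. [1, 0, 3, 2]
}
print(json.dumps(event))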


@nparlante
Contributor Author

@sarina @cahrens But seriously, thanks so much for all your help, and especially this timely surge at the end to get this thing across the finish line. I'm sure I could have wrangled my end of it faster, but I also have a sense that this is really kind of a complicated feature, so tons of review was maybe ultimately the right thing. I'm thinking of the masking ... jeez that spans a bunch of conceptual levels.

@nparlante
Contributor Author

@rocha @mulby Hey Carlos and Gabe -- this should be ready to go for any last testing. My hope is to get it in the next .org release. It would be great actually to see some testing of the analytics data downstream. I've tracked it through capa_base basically, and have a few tests there, but I don't have much visibility of what happens after track_function() gets called.

You can add shuffle="true" in a multiple choice question, and the masking will be active. Things to look for downstream:

  1. "answers" is translated
  2. "student_answers" is translated
  3. a "permutation" key is added that details if it was shuffle or answer-pool, and shows the full as-displayed list. It's easy to imagine doing neat research with that data -- first position vs. last and whatnot.

Obviously you should never see mask_xx anywhere. I suppose you could just grep for it!
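Something like this, for example (the log file path is illustrative; point it at wherever your tracking log actually lives):

# Sanity check: no masked ids should ever escape into the tracking log.
with open('tracking.log') as log:
    leaks = [line for line in log if 'mask_' in line]
print('%d lines containing mask_' % len(leaks))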

@rocha
Contributor

rocha commented Mar 21, 2014

Hi @nparlante. Thanks for such amazing work! We are starting the release of answer distributions computed through our pipeline next week. Would you be ok with waiting a little bit more before merging this PR? Our plan is to run a test with the new code, but we won't be able to do so until next week.

@nparlante
Contributor Author

@rocha Hey Carlos, thanks I think that should be fine. We don't want to slip too much of course, but what you describe sounds fine. And of course getting some thorough testing sounds great.

@brianhw
Contributor

brianhw commented Mar 28, 2014

We ran a test of this through the answer distribution pipeline, and it looks good.

@brianhw
Contributor

brianhw commented Mar 28, 2014

@nparlante It looks like there are still many suggestions from Sarina that haven't been addressed.

But looking at the output of "problem_check" events emitted by Capa, the right values are output and they do work with our answer distributions. I tested both shuffle and answer-pool options.

As a sanity check, I also looked at other events generated by my testing. As might be expected, browser and implicit events contain the "mask_N" values, as does the state stored in courseware_studentmodule.

Looking at save_problem_success events, the event/state/student_answers/input_id value is unmasked (e.g. "choice_X" instead of "mask_Y"). However, for reset_problem, the event/old_state/student_answers/input_id value is still masked ("mask_Y" instead of "choice_X"). (There is, of course, no new state, since it has been reset.)

I'm not able to look at the values for state that are stored in problem_check events because I had to perform a "reset" before I could do a "check" again.

@sarina
Contributor

sarina commented Mar 28, 2014

@brianhw I'm satisfied my comments are addressed.

@nparlante before merging, please make sure you're at 100% quality: https://jenkins.testeng.edx.org/job/edx-platform-report/3716/Diff_Quality_Report/? (there are some lingering pylint violations)

Also please squash/reword your commits for a clean history (no "WIP", no "Respond to review" commits - the number of commits you end up with is really just your own judgement call)

@brianhw
Contributor

brianhw commented Mar 31, 2014

@nparlante Please also fix the reset_problem so that it outputs unmasked data.

Contributor

If you had "num_minute" and "num_second" below, this should be "num_hour".

Contributor Author

Gotcha, done.

@brianhw
Contributor

brianhw commented Apr 5, 2014

@nparlante Thanks for fixing the reset and responding to my comments. So you just have to address Sarina's last comment, get the tests passing, improve the quality, and squash the commits. 👍

@nparlante
Contributor Author

@brianhw Thanks Brian. I really appreciate that you figured out the old_state case, as I was surely never going to think of that. I'll work on the other stuff.

@nparlante
Contributor Author

Ok, I'm working on the last few bits. I'm hoping the fast-forward clears up the failing tests. Whoa, what's up with that huge list of commits? I must have done the rebase in some suboptimal way.

@nparlante
Contributor Author

Hmm, ok, I'm going to re-do the rebase. (later) Ok, done. I'm hoping the spuriously failing tests will have fixed themselves. (even later) Yay, that worked for the tests.

@nparlante
Contributor Author

@sarina Hey Sarina, ok I think I've done everything, so I'm getting ready to click the old button, say late Mon. (Well, I need to do one last rebase.) There's one remaining pylint violation, which is that the check_problem function is long, which it is. I was thinking it was best to just leave that pylint as is, as a sort of reminder to apply decomposition when messing with that function. What do you think?

@sarina
Contributor

sarina commented Apr 14, 2014

You can wrap the function in disable pragmas:

# pylint: disable=too-many-statements
def check_problem():
    really_long_stuff
# pylint: enable=too-many-statements

@nparlante
Contributor Author

@sarina Ok, thanks Sarina. I'll give it a last look-over tonight and ship it.

Features by Jeff Ericson and Nick Parlante squashed
down to one commit.

-shuffle of choices within a choicegroup (Nick)
-answer-pool subsetting within a choicegroup (Jeff)
-masking of choice names within a choicegroup, (Nick)
 used by shuffle and answer-pool
-targeted-feedback within a multiplechoiceresponse (Jeff)
-delay-between-submissions capa feature (Jeff)
nparlante added a commit that referenced this pull request Apr 15, 2014
Shuffle feature for multiple choice questions
@nparlante nparlante merged commit f1eefc1 into master Apr 15, 2014
@jbau

jbau commented Apr 15, 2014

(woot)^1000 !

@nparlante
Contributor Author

@jbau Thanks Jason -- I was torn between thinking up some witty epitaph for good ol' 1499 and the fear that it might jinx things.

@tusbar
Contributor

tusbar commented Apr 15, 2014

🌟 👍 🌟

@sarina
Contributor

sarina commented Apr 15, 2014

w00t ✨ 🚀

be sure to delete your branch, too, once you've merged (it speeds up builds and deploys)

@rocha
Contributor

rocha commented Apr 16, 2014

Nice, congrats! :shipit:

@sarina sarina deleted the nick/shuffle-question branch April 18, 2014 17:56
@sarina sarina mentioned this pull request Aug 15, 2014
xirdneh pushed a commit to open-craft/openedx-platform that referenced this pull request Jun 12, 2019