# Iris merge requests

## !554: Add persistent points-to predicate to Iris
*Simon Friis Vindum · milestone: Iris 3.4 · updated 2023-02-14*

This is the same as !493 but based on !486. The overall description in !493 still applies.

### Notation
As in the previous MR I've used the notation `l ↦□ v` for the new persistent points-to predicate and changed the notation used by `gen_inv_heap`. In the original MR description [I argued for this choice](https://gitlab.mpi-sws.org/iris/iris/-/merge_requests/493#notation) and @jung [replied with some additional arguments](https://gitlab.mpi-sws.org/iris/iris/-/merge_requests/493#note_55547) to consider. I think the conclusion is that using `l ↦□ v` for the persistent points-to predicate is a good idea, but that we need to come up with some other notation for `gen_inv_heap` than what is currently in this MR.

## !570: add big_sepS_elem_of_acc_impl
*Ralf Jung · milestone: Iris 3.4 · updated 2021-01-26*
This lemma encapsulates the useful pattern of getting access to a single element in the big-op while, at the same time, changing the big-op statement in a way that is trivial to adjust for all the other elements.
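The statement is roughly as follows (a paraphrase; the exact form in the Iris source may differ, e.g. in how "all the other elements" is phrased):

```coq
(* Paraphrased statement sketch; premises may differ from the source. *)
Lemma big_sepS_elem_of_acc_impl `{Countable A} {PROP : bi}
    (Φ : A → PROP) (X : gset A) (x : A) :
  x ∈ X →
  ([∗ set] y ∈ X, Φ y) -∗
  (* direct access to the element [x]... *)
  Φ x ∗
  (* ...plus a way to rebuild the big-op for a new predicate [Ψ], given
     [Ψ x] and a persistent implication adjusting the other elements: *)
  (∀ Ψ, Ψ x -∗ □ (∀ y, ⌜y ∈ X⌝ → ⌜y ≠ x⌝ → Φ y -∗ Ψ y) -∗
   [∗ set] y ∈ X, Ψ y).
```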
We should probably have this as well for the other big-ops, but I think I will need some help for the "2" versions... this one was already kind of nasty to figure out. ^^

## !572: Rename `mnat`/`mnat_auth` into `mono_nat`
*Robbert Krebbers · milestone: Iris 3.4 · updated 2020-12-06*
- This avoids confusion between `mnat` and `max_nat`. The `m` stands for `mono`.
- With `_mono` added, the `_auth` suffix in the algebra name no longer makes
sense, so I removed it.
- This makes the names between the logic- and the algebra-level library consistent.
- I also renamed `..._frag` into `_lb` in the algebra-level library so as to
make it consistent with the logic-level library.
Furthermore, this makes the order of the lemmas consistent, and makes the versions for fractions consistent.
/cc @tchajed

## !587: Make `wp_apply` perform `wp_pure` in small steps until the lemma matches the goal
*Robbert Krebbers · milestone: Iris 3.4 · updated 2022-07-07*
Currently `wp_apply` first performs `wp_pures`, which might cause the goal to be normalized too far.
We experienced this issue in Actris (see https://gitlab.mpi-sws.org/iris/actris/-/blob/master/theories/examples/basics.v#L358), where we had to carefully call `wp_pure` a certain number of times and then `iApply` the lemma.
This MR fixes this issue by adding a new variant `wp_smart_apply` that performs `wp_pure` step by step. The old `wp_apply` now performs no reduction whatsoever.

## !596: Remove `CopyDestruct` heuristic
*Robbert Krebbers · milestone: Iris 3.4 · updated 2020-12-06*
The CI showed that !591 caused a lot of breakage in the reverse dependencies.
While debugging this breakage, I noticed that the heuristic in `iDestruct (H ...) as ..`, which decides when to remove/keep the original hypothesis `H`, was broken. I implemented this heuristic several years ago, but as !591 showed, the implementation was buggy and hence it was never actually triggered.
## The problem
Similar to Coq's `destruct`, the tactic `iDestruct (H ...) as ..` removes the original hypothesis `H` if there is no information loss caused by the destruct. For example:
- If `H : P ∨ Q`, then `H` **can** be removed without information loss after eliminating the disjunction. The results `P` and `Q` are strictly stronger than `P ∨ Q`.
- If `H : ∀ x, P x`, then `H` **cannot** be removed without information loss after eliminating the universal quantifier. After all, we might want to use `H` with a different value for `x`.
Note that in separation logic these considerations only apply if `H` is in the intuitionistic context; if `H` is in the spatial context, we cannot keep `H` either way.
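The second case can be illustrated with a small sketch (the lemma and all names here are hypothetical; `Φ` is an arbitrary predicate):

```coq
From iris.proofmode Require Import tactics.

Section sketch.
  Context {PROP : bi} (Φ : nat → PROP).

  (* [H] is universally quantified, so it may be needed at several
     instances; removing it after the first use would lose information. *)
  Lemma forall_reuse : □ (∀ x, Φ x) -∗ Φ 0 ∧ Φ 1.
  Proof.
    iIntros "#H".
    iSplit; [iApply ("H" $! 0) | iApply ("H" $! 1)].
  Qed.
End sketch.
```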
## The old heuristic
The old heuristic, which was wrongly implemented and thus never triggered, looked at the type of `H` in `iDestruct (H ...) as ..`. If the type contains a universal quantifier, it would keep `H`, otherwise it would remove `H`. As the breakage of !591 showed, this heuristic had several problems:
- The hypothesis `H` might have type `P -∗ Q` and we might instantiate `H` with `HP : P` from the spatial context, i.e., `iDestruct ("H" with "HP") as "HQ"`. The old heuristic would remove `H`, which would lead to information loss because `HQ` ends up in the spatial context, while we might want to use `H` another time.
- The hypothesis `H` might have type `P ↔ Q` and we might instantiate `H` with `HP : P`, i.e., `iDestruct ("H" with "HP") as "HQ"`. The old heuristic would remove `H`, which would lead to information loss, because we might also want to use the other direction of the bi-implication.
## The new heuristic
The new heuristic is much simpler: it will only remove `H` when `iDestruct "H" as ..` does not eliminate any universal quantifiers or wands/implications. This is simply detected by checking whether `H` is an identifier or a proof mode term (i.e., whether it contains `$!` or `with`).
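For example, because a proof mode term like `("H" with "HP")` keeps `H`, an intuitionistic wand can be used repeatedly (a minimal sketch with hypothetical lemma names):

```coq
From iris.proofmode Require Import tactics.

Section sketch.
  Context {PROP : bi} (P Q : PROP).

  (* [iDestruct ("H" with "HP")] keeps the intuitionistic wand [H],
     so it can be specialized a second time. *)
  Lemma wand_reuse : □ (P -∗ Q) -∗ □ P -∗ Q ∗ Q.
  Proof.
    iIntros "#H #HP".
    iDestruct ("H" with "HP") as "HQ1".
    iDestruct ("H" with "HP") as "HQ2".
    iFrame.
  Qed.
End sketch.
```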
## Backwards compatibility
The new heuristic is backwards compatible. It has the same behavior as we had prior to !589, but it removes a bunch of dead code, restructures some control flow, documents the current behavior, and adds some tests.
## Todo
We should update the proof mode documentation. I have not done that yet to avoid conflicts with !589.
## Credit
The tests are adapted from @tchajed's !591.

## !637: add better lemmas for working with mask-changing fupd, and rearrange names a bit
*Ralf Jung · milestone: Iris 3.4 · updated 2021-03-03*
A proposal for addressing parts of https://gitlab.mpi-sws.org/iris/iris/-/merge_requests/527 and https://gitlab.mpi-sws.org/iris/iris/-/merge_requests/615 without adding any new proofmode typeclass instances.
The heart of this MR is two new lemmas that can conveniently be `iApply`ed when the goal is a mask-changing fupd:
```coq
(** Weaken the first mask of the goal from [E1] to [E2].
    This lemma is intended to be [iApply]ed. *)
Lemma fupd_mask_weaken {E1} E2 {E3 P} :
  E2 ⊆ E1 →
  ((|={E2,E1}=> emp) ={E2,E3}=∗ P) -∗ |={E1,E3}=> P.

(** Introduction lemma for a mask-changing fupd.
    This lemma is intended to be [iApply]ed. *)
Lemma fupd_mask_intro E1 E2 P :
  E2 ⊆ E1 →
  ((|={E2,E1}=> emp) -∗ P) -∗ |={E1,E2}=> P.
```
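As a usage sketch (the lemma `close_example` and the exact typeclass setup are hypothetical and may differ across Iris versions), the second lemma lets one open the mask and later restore it via the `(|={E2,E1}=> emp)` "closing" resource:

```coq
From iris.proofmode Require Import tactics.
From iris.bi Require Import updates.

Section example.
  Context {PROP : bi} `{!BiFUpd PROP}.

  (* Open the mask from ⊤ to ∅; [Hclose] restores it afterwards. *)
  Lemma close_example (P : PROP) : P -∗ |={⊤,∅}=> |={∅,⊤}=> P.
  Proof.
    iIntros "HP".
    iApply fupd_mask_intro; [set_solver|].
    iIntros "Hclose".
    iMod "Hclose" as "_".
    by iModIntro.
  Qed.
End example.
```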
In particular, the second lemma is easier to use than the old `fupd_mask_intro'`, which was used for both the "weaken" and the "intro" case. For the "weaken" case, I think `fupd_mask_intro'`, with some adjustment of the implicit parameters, is still the better choice, so that is what this MR proposes.
Sadly, `fupd_mask_weaken` is already taken as a name, and so is `fupd_intro_mask` which is way too confusing when adding `fupd_mask_intro`. So this MR also renames some existing lemmas: `fupd_intro_mask` → `fupd_mask_intro_subseteq`, `fupd_intro_mask'` → `fupd_mask_subseteq`, `fupd_mask_weaken` → `fupd_mask_intro_discard`. Finally, it removes the unused `fupd_mask_same`.
We now have `fupd_mask_subseteq` in the fupd typeclass instead of the more general `fupd_mask_intro_subseteq`; the latter can be derived from the former.
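Concretely, the derivation goes roughly as follows (a paraphrased sketch; the statements may differ slightly from the Iris source):

```coq
From iris.proofmode Require Import tactics.
From iris.bi Require Import updates.

Section derive.
  Context {PROP : bi} `{!BiFUpd PROP}.

  (* Sketch: deriving the general version from the interface lemma
     [fupd_mask_subseteq : E2 ⊆ E1 → ⊢ |={E1,E2}=> |={E2,E1}=> emp]
     by carrying [P] through the mask-changing round trip. *)
  Lemma mask_intro_subseteq_derived E1 E2 (P : PROP) :
    E2 ⊆ E1 → P ⊢ |={E1,E2}=> |={E2,E1}=> P.
  Proof.
    iIntros (?) "HP".
    iMod (fupd_mask_subseteq E2) as "Hclose"; first done.
    iModIntro. iMod "Hclose" as "_". by iModIntro.
  Qed.
End derive.
```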
All of these lemmas involve a subseteq, but only some have it in the name -- so the names aren't really great, but I also feel like we reached a local maximum here.

## !639: finalize changelog for Iris 3.4 release
*Ralf Jung · milestone: Iris 3.4 · updated 2021-02-16*
I picked some highlights, and in the detailed section (where appropriate) I split the enumeration between "new stuff" and "changed stuff".
@robbertkrebbers, please let me know what you think.