[srsran-users] NAS Short MAC invalid issue

J Giovatto jgiovatto at adjacentlink.com
Wed Nov 16 14:33:22 UTC 2022


Hi

Yes, it worked!!!

Thanks

Joe

On 11/16/22 05:43, Pedro Alvarez wrote:
> Hi Joe --
>
> This might be easier to fix than I had initially thought. The support
> mechanisms to delay the COUNT increment are already in the code, so it
> might really be a one-liner.
> Could you try the attached patch?
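>
> In case the idea is unclear, here is a rough, self-contained model of
> the intended behavior (illustrative only -- hypothetical names, not the
> actual patch):
>
> ```
> // Model of deferring the NAS UL COUNT increment until the RRC
> // connection completes, so failed attempts reuse the same COUNT.
> #include <cstdint>
> #include <cstdio>
>
> struct nas_ctxt_t {
>   uint32_t tx_count    = 0;     // NAS UL COUNT, an input to the MAC-I
>   bool     inc_pending = false; // sent, but success not yet confirmed
> };
>
> void send_service_request(nas_ctxt_t& ctxt) {
>   std::printf("MAC-I computed with COUNT=%u\n", ctxt.tx_count);
>   ctxt.inc_pending = true; // the buggy path bumps tx_count right here
> }
>
> void on_rrc_connection_complete(nas_ctxt_t& ctxt) {
>   if (ctxt.inc_pending) {
>     ctxt.tx_count++; // the "one-liner": bump COUNT only on success
>     ctxt.inc_pending = false;
>   }
> }
>
> int main() {
>   nas_ctxt_t ctxt;
>   for (int i = 0; i < 3; i++) { send_service_request(ctxt); } // fails
>   on_rrc_connection_complete(ctxt); // COUNT advances to 1 only now
>   std::printf("COUNT after success: %u\n", ctxt.tx_count);
> }
> ```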
>
> On Tue, Nov 15, 2022 at 5:06 PM Pedro Alvarez <pedro.alvarez at srs.io> 
> wrote:
>
>     I think I was able to replicate this. I used Joe's patch and my
>     own patch to improve the logging.
>     See the information below:
>
>     ue.log
>     ```
>     2022-11-15T16:44:39.769999 [NAS    ] [I] Service Request with
>     cause mo-Data.
>     2022-11-15T16:44:39.770014 [NAS    ] [I] NAS is already registered
>     but RRC disconnected. Connecting now...
>     2022-11-15T16:44:39.770016 [NAS    ] [I] Generating service request
>     2022-11-15T16:44:39.770417 [NAS    ] [D] Generated MAC-I. *COUNT=73*
>         0000: c5 48 *13 41*
>     2022-11-15T16:44:39.770420 [NAS    ] [D] K_nas_int (128)
>         0000: *45 a7 40 0a af 30 73 7c df 0e a0 84 87 51 4e 28*
>     2022-11-15T16:44:39.770421 [NAS    ] [D] NAS PDU
>         0000: *c7 09*
>     ...
>     2022-11-15T16:44:40.273606 [NAS    ] [I] Received service reject
>     with EMM cause=0x9 and t3446=0
>     ```
>     epc.log
>     ```
>     2022-11-15T16:44:40.257264 [NAS    ] [W] Short integrity check
>     failure. Local: *estimated count=9*, [30 60 34 c1], Received:
>     count=9, [*13 41*]
>     2022-11-15T16:44:40.257265 [NAS    ] [W] K_nas_int
>         0000: 80 6f 82 91 f7 04 21 21 95 73 91 d5 49 bb 2a a4
>         0010: *45 a7 40 0a af 30 73 7c df 0e a0 84 87 51 4e 28*
>     2022-11-15T16:44:40.257267 [NAS    ] [W] NAS PDU
>         0000: *c7 09* 13 41
>     2022-11-15T16:44:40.257272 [NAS    ] [I] Service Request -- Short
>     MAC invalid
>     2022-11-15T16:44:40.257324 [NAS    ] [W] Service Request -- Short
>     MAC invalid. Sending service reject.
>     2022-11-15T16:44:40.257326 [NAS    ] [I] Service Reject --
>     eNB_UE_S1AP_ID 2 MME_UE_S1AP_ID 2.
>     ```
>
>     So the message with the Short MAC-I of [0x13, 0x41] has a COUNT of
>     73 from the UE's perspective and 9 from the MME's perspective.
>     This is because the UE tried to send multiple NAS messages, all of
>     which failed because the RRC connection did not complete, and each
>     attempt incremented the UE's COUNT. I believe the short MAC message
>     only carries 5 bits of the COUNT, so the MME can only recover from
>     a gap of up to 32 messages.
>
>     I think the fix should be to increment the COUNT only after the
>     RRC connection completes, as the standard specifies.
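>
>     To make the 5-bit limit concrete, the MME-side estimation can be
>     modeled like this (my own sketch, not the actual srsRAN code):
>
>     ```
>     // The service request carries only the 5 LSBs of the UL NAS
>     // COUNT, so the MME picks the smallest COUNT at or above its
>     // expected value whose 5 LSBs match.
>     #include <cstdint>
>     #include <cstdio>
>
>     uint32_t estimate_count(uint32_t expected, uint8_t seq5) {
>       uint32_t est = (expected & ~0x1Fu) | (seq5 & 0x1Fu);
>       return (est < expected) ? est + 32 : est;
>     }
>
>     int main() {
>       // From the logs above: the UE used COUNT=73 (73 & 0x1F == 9)
>       // while the MME expected 9, so the estimate stays at 9 and the
>       // check fails; the gap of 64 exceeds the 32-message window.
>       std::printf("estimate = %u (UE used 73)\n",
>                   estimate_count(9, 73 & 0x1F));
>     }
>     ```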
>
>
>     On Tue, Nov 15, 2022 at 4:22 PM Pedro Alvarez
>     <pedro.alvarez at srs.io> wrote:
>
>         Accidentally did not hit "reply all" -- sorry, Joe, for getting
>         this email twice...
>         ---
>
>         I think the quickest way to get to the bottom of this is to
>         take a systematic approach to narrow down the problem.
>         My approach when facing an integrity issue is usually the
>         following:
>
>         1 - Double check the message. This is the most common issue:
>         the message itself gets corrupted, and integrity fails because
>         of it.
>         2 - Double check the keys. If they don't match, check the
>         inputs used to derive them.
>         3 - Double check the MAC-I parameter inputs (see the sketch
>         after this list).
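>
>         To help with point 3, here is a small offline cross-check
>         (my own sketch, not srsRAN code). It assumes 128-EIA2
>         (AES-CMAC) is the configured integrity algorithm and that
>         bearer=0 and direction=0 (uplink) are the right inputs for
>         an uplink NAS message; paste the key, COUNT and PDU bytes
>         from the debug logs into the placeholders.
>
>         ```
>         // Offline 128-EIA2 (AES-CMAC, TS 33.401 B.2.3) MAC-I check.
>         // Sketch only, not srsRAN code. Build: g++ check.cc -lcrypto
>         #include <openssl/cmac.h>
>         #include <openssl/evp.h>
>         #include <cstdint>
>         #include <cstdio>
>         #include <cstring>
>         #include <vector>
>
>         // MAC-I = 32 MSBs of AES-CMAC over
>         // COUNT(32b) || BEARER(5b) || DIR(1b) || 26 zero bits || msg
>         uint32_t eia2_mac(const uint8_t key[16], uint32_t count,
>                           uint8_t bearer, uint8_t dir,
>                           const uint8_t* msg, size_t len) {
>           std::vector<uint8_t> in(8 + len, 0);
>           in[0] = count >> 24; in[1] = count >> 16;
>           in[2] = count >> 8;  in[3] = count;
>           in[4] = (uint8_t)(((bearer & 0x1F) << 3) | ((dir & 1) << 2));
>           std::memcpy(&in[8], msg, len);
>           uint8_t   mac[16];
>           size_t    maclen = 0;
>           CMAC_CTX* ctx    = CMAC_CTX_new();
>           CMAC_Init(ctx, key, 16, EVP_aes_128_cbc(), nullptr);
>           CMAC_Update(ctx, in.data(), in.size());
>           CMAC_Final(ctx, mac, &maclen);
>           CMAC_CTX_free(ctx);
>           return (uint32_t)mac[0] << 24 | mac[1] << 16 |
>                  mac[2] << 8 | mac[3];
>         }
>
>         int main() {
>           // Placeholders: fill in from the logs. For a service
>           // request, only the trailing 16 bits (the short MAC-I)
>           // actually go on the wire.
>           const uint8_t k_nas_int[16] = {0};
>           const uint8_t pdu[]         = {0x00, 0x00};
>           std::printf("MAC-I = %08x\n",
>                       eia2_mac(k_nas_int, 0, 0, 0, pdu, sizeof(pdu)));
>         }
>         ```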
>
>         While the PDCP logging is designed to surface this information
>         quickly, the NAS needs some patches to provide it.
>         Could you try out this patch, please, Joe? Set both the EPC and
>         UE NAS logs to debug, and everything else to warning.
>
>         Note also that the COUNT printed by the EPC is estimated, so
>         it is possible that there is still a mismatch in the COUNT.
>         We need to check both the UE's and the EPC's logs to rule that
>         out.
>
>         On Tue, Nov 15, 2022 at 3:20 PM J Giovatto
>         <jgiovatto at adjacentlink.com> wrote:
>
>             On 11/15/22 05:19, Andre Puschmann wrote:
>             > Hey,
>             >
>             > On 14/11/22 17:02, J Giovatto wrote:
>             >> Yes, running zmq. I have confirmed that earfcn = 2850,
>             >> and only earfcn = 2850, is in play.
>             >
>             > Ok, good.
>             >
>             >>
>             >> I have attached a patch for zmq that will create a
>             >> 5 min DL blackout that you can try.
>             >
>             > Well, 5 min is long, and likely longer than the max time
>             > the UE will attempt a reestablishment. But that depends
>             > on your configs, of course.
>             I'll try to find the min duration for this to happen, but
>             5 min is a sure thing.
>             >
>             > To replicate this, I would appreciate it if you could
>             > open an issue on GitHub and post all configs and full
>             > logs. Also, instead of patching the code, could you try
>             > to replicate this with the channel emulator? There is an
>             > RLF option that should do exactly what you need.
>
>
>             I tried to set the eNB to do just that, but the link never
>             broke. I was expecting 30 sec on and 300 sec off:
>
>             [channel.dl.rlf]
>             enable        = true
>             t_on_ms       = 30000    # link up for 30 s
>             t_off_ms      = 300000   # link down for 300 s (5 min)
>
>             >
>             > It would also be helpful to do a git-bisect, in case
>             > it's a regression introduced by a commit.
>
>
>             Sure thing, thanks.
>
>             >
>             > Thanks
>             > Andre
>             >
>             >>
>             >> Seems like it only happens with pending UL traffic
>             >> (pinging the EPC).
>             >>
>             >> Thanks for looking.
>             >>
>             >>
>             >> Joe
>             >>
>             >>
>             >>>
>             >>>
>             >>>> Maybe this is related to issue #960?
>             >>>
>             >>> I don't think it is related to
>             >>> https://github.com/srsran/srsRAN/issues/960
>             >>>
>             >>> But have a look at the above, and we'll check further
>             >>> if needed.
>             >>>
>             >>> Thanks
>             >>> Andre
>             >>
>             >>
>             >
>
