Hi,
What do you think about the following patch? We had some discussion about this recently on the Buildroot mailing list. Can you think of any use cases other than rpcbind/nfs-utils?
best regards Waldemar
Hello,
On Mon, 20 Mar 2017 18:21:45 +0100, Waldemar Brodkorb wrote:
What do you think about the following patch? We had some discussion about this recently on the Buildroot mailing list. Can you think of any use cases other than rpcbind/nfs-utils?
I had a quick look in Buildroot to see which packages currently only work with "native RPC" (i.e. the implementation provided by the C library) and not with libtirpc. Note that when I say "don't work", I mean "not supported by Buildroot"; the package itself may well be able to handle libtirpc, it just hasn't been done in Buildroot.
- autofs, depends on BR2_TOOLCHAIN_HAS_NATIVE_RPC only
- openvmtools, depends on BR2_TOOLCHAIN_HAS_NATIVE_RPC only
- portmap, obviously. But I believe we could get rid of this package entirely, it is deprecated and rpcbind is the official replacement.
- samba4, depends on BR2_TOOLCHAIN_HAS_NATIVE_RPC only
And that's it. So it leaves three packages to really analyze and see the impact.
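As an aside, whether a toolchain provides "native RPC" can be probed directly: the legacy SunRPC headers either are or are not shipped by the C library. A hedged sketch (run against the host gcc here as a stand-in; substitute your cross compiler):

```shell
# Probe for the legacy SunRPC ("native RPC") headers in the C library.
# Replace "gcc" with your cross compiler, e.g. ${CROSS_COMPILE}gcc.
if echo '#include <rpc/rpc.h>' | gcc -E -x c - >/dev/null 2>&1; then
    echo "native RPC headers present"
else
    echo "no native RPC headers (libtirpc needed)"
fi
```

This is essentially what a build-system check would do; on toolchains without the header, packages fall back to (or fail without) libtirpc.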
Best regards,
Thomas
Thomas, All,
On 2017-03-20 20:51 +0100, Thomas Petazzoni spake thusly:
Hello,
On Mon, 20 Mar 2017 18:21:45 +0100, Waldemar Brodkorb wrote:
What do you think about the following patch? We had some discussion about this recently on the Buildroot mailing list. Can you think of any use cases other than rpcbind/nfs-utils?
I had a quick look in Buildroot to see which packages currently only work with "native RPC" (i.e. the implementation provided by the C library) and not with libtirpc. Note that when I say "don't work", I mean "not supported by Buildroot"; the package itself may well be able to handle libtirpc, it just hasn't been done in Buildroot.
- autofs, depends on BR2_TOOLCHAIN_HAS_NATIVE_RPC only
We have the homepage pointing to the BLFS book: http://www.linuxfromscratch.org/blfs/view/svn/general/autofs.html
Autofs Dependencies: Optional: libtirpc-1.0.1 [...]
So we may be able to make it work with libtirpc in the end...
- openvmtools, depends on BR2_TOOLCHAIN_HAS_NATIVE_RPC only
And it is not available for uClibc toolchains. Making it glibc-only would not be too problematic, given the type of package this is.
- portmap, obviously. But I believe we could get rid of this package entirely, it is deprecated and rpcbind is the official replacement.
Maybe drop it, especially since nothing depends on or selects it.
- samba4, depends on BR2_TOOLCHAIN_HAS_NATIVE_RPC only
Given the size of the package, I doubt having it depend on glibc would be too bad.
And when glibc actually drops its internal RPC, samba will have to add support for an alternative implementation anyway.
Regards, Yann E. MORIN.
And that's it. So it leaves three packages to really analyze and see the impact.
Best regards,
Thomas
Thomas Petazzoni, CTO, Free Electrons Embedded Linux and Kernel engineering http://free-electrons.com
All,
On 2017-03-20 21:34 +0100, Yann E. MORIN spake thusly:
On 2017-03-20 20:51 +0100, Thomas Petazzoni spake thusly:
On Mon, 20 Mar 2017 18:21:45 +0100, Waldemar Brodkorb wrote:
What do you think about the following patch? We had some discussion about this recently on the Buildroot mailing list. Can you think of any use cases other than rpcbind/nfs-utils?
I had a quick look in Buildroot to see which packages currently only work with "native RPC" (i.e. the implementation provided by the C library) and not with libtirpc.
[--SNIP--]
- samba4, depends on BR2_TOOLCHAIN_HAS_NATIVE_RPC only
Given the size of the package, I doubt having it depend on glibc would be too bad.
And when glibc actually drops its internal RPC, samba will have to add support for an alternative implementation anyway.
I had a quick look at BLFS again, and they recommend building against libtirpc:
http://www.linuxfromscratch.org/blfs/view/svn/basicnet/samba.html
So in the end we may also be able to switch entirely to libtirpc.
Regards, Yann E. MORIN.
On 03/20/2017 10:21 AM, Waldemar Brodkorb wrote:
Hi,
What do you think about the following patch? We had some discussion about this recently on the Buildroot mailing list. Can you think of any use cases other than rpcbind/nfs-utils?
So there's a small downside to this, which I reckon will likely not stand out in the grand scheme of things, but I'm still tabling it here for people to be aware of. It can potentially degrade some micro-benchmarks such as LMBench's lat_proc shell, since there is now an additional shared library (libtirpc) to load. We spotted this back in 2015 when comparing Buildroot builds (using libtirpc) against our homegrown builds (using the native toolchain RPC).
-Vineet
Hello,
On Mon, 3 Apr 2017 15:53:28 -0700, Vineet Gupta wrote:
What do you think about the following patch? We had some discussion about this recently on the Buildroot mailing list. Can you think of any use cases other than rpcbind/nfs-utils?
So there's a small downside to this, which I reckon will likely not stand out in the grand scheme of things, but I'm still tabling it here for people to be aware of. It can potentially degrade some micro-benchmarks such as LMBench's lat_proc shell, since there is now an additional shared library (libtirpc) to load. We spotted this back in 2015 when comparing Buildroot builds (using libtirpc) against our homegrown builds (using the native toolchain RPC).
How big was the performance hit? Is it just due to loading an additional library, or to internal RPC implementation aspects that differ between the uClibc built-in implementation and the libtirpc implementation?
If it's just due to loading an additional library, is this micro-benchmark really representative of a real-world workload? Is it really a real-life situation to have very short-lived processes that need RPC support?
Best regards,
Thomas
Hi Thomas,
On 04/03/2017 09:57 PM, Thomas Petazzoni wrote:
So there's a small downside to this, which I reckon will likely not stand out in the grand scheme of things, but I'm still tabling it here for people to be aware of. It can potentially degrade some micro-benchmarks such as LMBench's lat_proc shell, since there is now an additional shared library (libtirpc) to load. We spotted this back in 2015 when comparing Buildroot builds (using libtirpc) against our homegrown builds (using the native toolchain RPC).
How big was the performance hit?
Processor, Processes - times in microseconds - smaller is better
------------------------------------------------------------------------------
Host                 OS  Mhz null null      open slct sig  sig  fork exec sh
                             call  I/O stat clos TCP  inst hndl proc proc proc
--------- ------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
A7-720mhz Linux 3.4.61   720 0.50 0.97 4.37 9.38 26.0 1.00 5.59 1103 5061 8432
A7-720mhz Linux 3.4.61   720 0.50 0.96 4.24 9.31 26.0 1.00 5.59 1093 5029 10.K
So the sh proc (shell process) latency goes from 8432 to ~10000 microseconds, i.e. ~19% slower.
Is it just due to loading an additional library, or to internal RPC implementation aspects that differ between the uClibc built-in implementation and the libtirpc implementation?
The thing is that the Busybox binary ends up being linked against two libraries (libtirpc in addition to libc.so), so the dynamic loader (ldso) takes more time. lat_proc shell does an exec() of the (Busybox) shell, which in turn runs a hello-world program, hence the increased time.
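The extra loader work is visible directly in the binary's dynamic section: each DT_NEEDED entry is one more shared library to map and relocate on every exec(). A hedged sketch using readelf (/bin/sh is used here just as a stand-in for a busybox binary):

```shell
# List the shared libraries the dynamic loader must resolve at exec():
readelf -d /bin/sh | grep NEEDED
# A busybox linked against libtirpc would show an additional entry like:
#   0x00000001 (NEEDED)  Shared library: [libtirpc.so.3]
```

Comparing the NEEDED lists of the two busybox builds is a quick way to confirm where the extra startup cost comes from before reaching for a profiler.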
If it's just due to loading an additional library, is this micro-benchmark really representative of a real-world workload? Is it really a real-life situation to have very short-lived processes that need RPC support?
Hard to judge, really: for a desktop/server system it probably doesn't matter, but for a typical embedded system ...
-Vineet
Best regards,
Thomas
Hi, Vineet Gupta wrote,
Hi Thomas,
On 04/03/2017 09:57 PM, Thomas Petazzoni wrote:
So there's a small downside to this, which I reckon will likely not stand out in the grand scheme of things, but I'm still tabling it here for people to be aware of. It can potentially degrade some micro-benchmarks such as LMBench's lat_proc shell, since there is now an additional shared library (libtirpc) to load. We spotted this back in 2015 when comparing Buildroot builds (using libtirpc) against our homegrown builds (using the native toolchain RPC).
How big was the performance hit?
Processor, Processes - times in microseconds - smaller is better
------------------------------------------------------------------------------
Host                 OS  Mhz null null      open slct sig  sig  fork exec sh
                             call  I/O stat clos TCP  inst hndl proc proc proc
--------- ------------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
A7-720mhz Linux 3.4.61   720 0.50 0.97 4.37 9.38 26.0 1.00 5.59 1103 5061 8432
A7-720mhz Linux 3.4.61   720 0.50 0.96 4.24 9.31 26.0 1.00 5.59 1093 5029 10.K
So the sh proc (shell process) latency goes from 8432 to ~10000 microseconds, i.e. ~19% slower.
Is it just due to loading an additional library, or to internal RPC implementation aspects that differ between the uClibc built-in implementation and the libtirpc implementation?
The thing is that the Busybox binary ends up being linked against two libraries (libtirpc in addition to libc.so), so the dynamic loader (ldso) takes more time. lat_proc shell does an exec() of the (Busybox) shell, which in turn runs a hello-world program, hence the increased time.
If it's just due to loading an additional library, is this micro-benchmark really representative of a real-world workload? Is it really a real-life situation to have very short-lived processes that need RPC support?
Hard to judge, really: for a desktop/server system it probably doesn't matter, but for a typical embedded system ...
You are not forced to use Busybox with libtirpc. You only need it for NFS. For a modern NFS stack with rpcbind and nfs-utils you need libtirpc anyway. Busybox just implements a mount helper, which could use the internal uClibc RPC implementation. But in the end you need libtirpc on the embedded device.
So avoid busybox+libtirpc and use rpcbind+nfs-utils+libtirpc if you require NFS on your device.
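In Buildroot terms, that recommendation would roughly correspond to a defconfig fragment like the following (a sketch; option names should be checked against your Buildroot version):

```
# Modern NFS stack, built on libtirpc:
BR2_PACKAGE_LIBTIRPC=y
BR2_PACKAGE_RPCBIND=y
BR2_PACKAGE_NFS_UTILS=y
# ... and leave busybox's RPC-based NFS mount helper support disabled,
# so the busybox binary itself does not pull in libtirpc at exec() time.
```

With this split, only the long-lived NFS daemons pay the libtirpc loading cost, not every short-lived busybox invocation.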
I still see no real use case for the old internal RPC implementation on a modern embedded device.
best regards Waldemar
On 04/04/2017 01:44 PM, Waldemar Brodkorb wrote:
Hard to judge, really: for a desktop/server system it probably doesn't matter, but for a typical embedded system ...
You are not forced to use Busybox with libtirpc. You only need it for NFS. For a modern NFS stack with rpcbind and nfs-utils you need libtirpc anyway. Busybox just implements a mount helper, which could use the internal uClibc RPC implementation. But in the end you need libtirpc on the embedded device.
So avoid busybox+libtirpc and use rpcbind+nfs-utils+libtirpc if you require NFS on your device.
I still see no real use case for the old internal RPC implementation on a modern embedded device.
Right, as I mentioned in my first post, I'm not arguing for or against; I'm just sharing an observation, which was subtle when I first ran into it! It might not matter in the grand scheme of things!
-Vineet