Discussion:
[fedora-arm] armhf dnf is not working on aarch64 kernel
Chanho Park
2016-04-27 12:18:05 UTC
Hi all,

I want to use an armhf Fedora rootfs on an aarch64 kernel.
When I ran the dnf command on the armhf image with an aarch64 kernel,
dnf failed with the error below.

dnf -v install mesa
cachedir: /var/cache/dnf
DNF version: 1.1.6
Failed to synchronize cache for repo 'rpmfusion-free-updates' from
'http://mirrors.rpmfusion.org/mirrorlist?repo=free-fedora-updates-released-22&arch=aarch64':
Cannot prepare internal mirrorlist: No URLs in mirrorlist, disabling.
repo: using cache for: fedora
not found updateinfo for: Fedora 22 - aarch64
repo: using cache for: updates
not found deltainfo for: Fedora 22 - aarch64 - Updates
not found updateinfo for: Fedora 22 - aarch64 - Updates

Actually, armhf binaries and the rootfs can be executed even on an aarch64
kernel (it is fully compatible with armhf).
Maybe dnf determines its repository architecture from the uname call.
$ uname -m
aarch64
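
For what it's worth, running the same query under the 32-bit compat
personality changes the reported machine string; a rough illustration
(setarch is part of util-linux, and armv8l is what arm64 kernels typically
report here, so the exact value may vary):

$ setarch linux32 uname -m
armv8l

Even then, whether dnf maps that string to the armhfp repositories depends
on the dnf version, so the personality trick alone may not be enough.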

The rpm install also failed because Fedora doesn't ship an rpm
platform file.
So I added the file below. With it, armhf rpm files can be installed even
on an aarch64 kernel.

cat /etc/rpm/platform
armv7hl-fedora-linux-gnu
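
In other words, roughly as below; the --showrc line is just one way to
check what rpm now considers its install and compatible arches (output
details vary by rpm version), and the package name is a placeholder:

echo 'armv7hl-fedora-linux-gnu' > /etc/rpm/platform
rpm --showrc | grep -i -E 'install arch|compatible arch'
rpm -ivh some-armv7hl-package.rpm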

The question is: how can I run the 'dnf' command on armhf Fedora with an
aarch64 kernel?
--
Best Regards,
Chanho Park
Peter Robinson
2016-04-27 12:37:35 UTC
Post by Chanho Park
Hi all,
I want to use the armhf fedora rootfs on the aarch64 bit kernel.
You can't, it's not a use case we support.
Post by Chanho Park
When I ran the dnf command on the armhf image with aarch64 kernel, the
dnf command was failed with below error.
dnf -v install mesa
cachedir: /var/cache/dnf
DNF version: 1.1.6
Failed to synchronize cache for repo 'rpmfusion-free-updates' from
Cannot prepare internal mirrorlist: No URLs in mirrorlist, disabling.
repo: using cache for: fedora
not found updateinfo for: Fedora 22 - aarch64
repo: using cache for: updates
not found deltainfo for: Fedora 22 - aarch64 - Updates
not found updateinfo for: Fedora 22 - aarch64 - Updates
Actually, armhf binaries/rootfs can be executed even aarch64
kernel(fully compatible with armhf).
Maybe the dnf command tries to find its repo from uname call.
$ uname -m
aarch64
The rpm install was also failed because fedora doesn't have any rpm
platform file
So, I added below file. It is able to install armhf rpm file even
aarch64 kernel.
cat /etc/rpm/platform
armv7hl-fedora-linux-gnu
The question is 'how can I run 'dnf' command on armhf fedora with
aarch64 kernel?'
No. The ARMv7 and aarch64 ABIs aren't compatible; the only way we
support ARMv7 on aarch64 is via virtualisation. We will not be
supporting this or a "multilib" use case.

Peter
Chanho Park
2016-04-27 13:09:15 UTC
Hi Peter,
Post by Peter Robinson
Post by Chanho Park
Hi all,
I want to use the armhf fedora rootfs on the aarch64 bit kernel.
You can't, it's not a use case we support.
Why not? All ARM binaries can run in the AArch32 mode of an aarch64
kernel.
Post by Peter Robinson
Post by Chanho Park
When I ran the dnf command on the armhf image with aarch64 kernel, the
dnf command was failed with below error.
dnf -v install mesa
cachedir: /var/cache/dnf
DNF version: 1.1.6
Failed to synchronize cache for repo 'rpmfusion-free-updates' from
'
http://mirrors.rpmfusion.org/mirrorlist?repo=free-fedora-updates-released-22&arch=aarch64
Post by Chanho Park
Cannot prepare internal mirrorlist: No URLs in mirrorlist, disabling.
repo: using cache for: fedora
not found updateinfo for: Fedora 22 - aarch64
repo: using cache for: updates
not found deltainfo for: Fedora 22 - aarch64 - Updates
not found updateinfo for: Fedora 22 - aarch64 - Updates
Actually, armhf binaries/rootfs can be executed even aarch64
kernel(fully compatible with armhf).
Maybe the dnf command tries to find its repo from uname call.
$ uname -m
aarch64
The rpm install was also failed because fedora doesn't have any rpm
platform file
So, I added below file. It is able to install armhf rpm file even
aarch64 kernel.
cat /etc/rpm/platform
armv7hl-fedora-linux-gnu
The question is 'how can I run 'dnf' command on armhf fedora with
aarch64 kernel?'
No, the ARMv7 and aarch64 ABI aren't compatible, the only way we
support ARMv7 on aarch64 is via virtualisation. We will not be
supporting this or a "multilib" usecase.
The aarch64 kernel can execute both aarch64 and aarch32 (fully
armv7-compatible) binaries. For example, the kernel of the Raspberry Pi 3
is aarch64, so the Fedora ARM version can't run on the rpi3. Even though
all the binaries can run on it, only the dnf command can't.

Best Regards,
Chanho Park
Gordan Bobic
2016-04-27 13:15:26 UTC
Post by Chanho Park
Hi Peter,
Post by Peter Robinson
Post by Chanho Park
Hi all,
I want to use the armhf fedora rootfs on the aarch64 bit kernel.
You can't, it's not a use case we support.
Why not? All arm binaries can be runnable on aarch32 mode of aarch64
kernel.
Post by Peter Robinson
Post by Chanho Park
When I ran the dnf command on the armhf image with aarch64 kernel,
the
Post by Chanho Park
dnf command was failed with below error.
dnf -v install mesa
cachedir: /var/cache/dnf
DNF version: 1.1.6
Failed to synchronize cache for repo 'rpmfusion-free-updates' from
Cannot prepare internal mirrorlist: No URLs in mirrorlist,
disabling.
Post by Chanho Park
repo: using cache for: fedora
not found updateinfo for: Fedora 22 - aarch64
repo: using cache for: updates
not found deltainfo for: Fedora 22 - aarch64 - Updates
not found updateinfo for: Fedora 22 - aarch64 - Updates
Actually, armhf binaries/rootfs can be executed even aarch64
kernel(fully compatible with armhf).
Maybe the dnf command tries to find its repo from uname call.
$ uname -m
aarch64
The rpm install was also failed because fedora doesn't have any
rpm
Post by Chanho Park
platform file
So, I added below file. It is able to install armhf rpm file even
aarch64 kernel.
cat /etc/rpm/platform
armv7hl-fedora-linux-gnu
The question is 'how can I run 'dnf' command on armhf fedora with
aarch64 kernel?'
No, the ARMv7 and aarch64 ABI aren't compatible, the only way we
support ARMv7 on aarch64 is via virtualisation. We will not be
supporting this or a "multilib" usecase.
The aarch64 kernel can execute both aarch64 and aarch32(fully
compatible armv7) binaries. For example, the kernel of raspberry pi 3
is aarch64 and fedora arm version can't run on rpi3. Even all binaries
can run on it but only dnf command can't do that.
I can confirm that armv7hl (and armv5tel) userspace DOES work in
a chroot on aarch64, provided you use a sensible kernel (specifically,
one built with 4KB memory pages rather than 64KB memory pages).
I am running CentOS 7 armv7hl in a chroot and in LXC/docker containers
on CentOS 7 aarch64 (with an aarch64 kernel configured for 4KB pages
and backward compatibility enabled).
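
For reference, a bare-bones sketch of that kind of chroot setup; the paths,
the rootfs tarball name and the vendor string in /etc/rpm/platform are
illustrative and depend on the distro being dropped in:

mkdir -p /srv/armv7hl
tar -C /srv/armv7hl -xpf centos7-armv7hl-rootfs.tar.xz   # placeholder name
echo 'armv7hl-redhat-linux-gnu' > /srv/armv7hl/etc/rpm/platform
for d in proc sys dev; do mount --bind /$d /srv/armv7hl/$d; done
chroot /srv/armv7hl /bin/bash   # 32-bit userspace on the aarch64 host kernel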

So if the above doesn't work (with /etc/rpm/platform configured),
it has to be considered a bug.

Gordan
Peter Robinson
2016-04-27 13:17:19 UTC
Post by Chanho Park
Hi Peter,
Post by Peter Robinson
Post by Chanho Park
Hi all,
I want to use the armhf fedora rootfs on the aarch64 bit kernel.
You can't, it's not a use case we support.
Why not? All arm binaries can be runnable on aarch32 mode of aarch64 kernel.
Not exactly, actually; it's possible to have aarch64/ARMv8 CPUs that
don't have the 32-bit components.
Post by Chanho Park
Post by Peter Robinson
Post by Chanho Park
The question is 'how can I run 'dnf' command on armhf fedora with
aarch64 kernel?'
No, the ARMv7 and aarch64 ABI aren't compatible, the only way we
support ARMv7 on aarch64 is via virtualisation. We will not be
supporting this or a "multilib" usecase.
The aarch64 kernel can execute both aarch64 and aarch32(fully compatible
armv7) binaries. For example, the kernel of raspberry pi 3 is aarch64 and
fedora arm version can't run on rpi3. Even all binaries can run on it but
only dnf command can't do that.
Actually that isn't entirely true. The kernel currently shipped
in Raspbian for the RPi3 is actually an ARMv7 kernel, where the firmware
boots the ARM cores as v7 cores. The kernel code running there
is ARMv7 code, not Cortex-A53 code paths. That is a fairly special
use case, and you can actually do that on Fedora ARMv7 with a Fedora
ARMv7 kernel, not an aarch64 kernel.

ARM multilib is something we explicitly decided not to support when we
were dealing with that. Multilib is a mess on x86; it's not a mess we
need on ARM.

Peter
Gordan Bobic
2016-04-27 13:28:38 UTC
Post by Peter Robinson
Post by Chanho Park
Hi Peter,
Post by Peter Robinson
Post by Chanho Park
Hi all,
I want to use the armhf fedora rootfs on the aarch64 bit kernel.
You can't, it's not a use case we support.
Why not? All arm binaries can be runnable on aarch32 mode of aarch64 kernel.
Not exactly actually, it's possible to have aarch64/ARMv8 CPUs that
don't have the 32 bit components.
That is not the case here, though, so the answer doesn't seem
to address the question.
Post by Peter Robinson
Post by Chanho Park
Post by Peter Robinson
Post by Chanho Park
The question is 'how can I run 'dnf' command on armhf fedora with
aarch64 kernel?'
No, the ARMv7 and aarch64 ABI aren't compatible, the only way we
support ARMv7 on aarch64 is via virtualisation. We will not be
supporting this or a "multilib" usecase.
The aarch64 kernel can execute both aarch64 and aarch32(fully
compatible
armv7) binaries. For example, the kernel of raspberry pi 3 is aarch64 and
fedora arm version can't run on rpi3. Even all binaries can run on it but
only dnf command can't do that.
Actually that isn't entirely true. The kernel that's currently shipped
in raspbian for RPi3 is actually an ARMv7 kernel where the firmware
boots the ARM cores as v7 cores. The kernel code that's running there
is ARMv7 code not cortex-a53 code paths. That is a fairly special
usecase and you can actually do that on Fedora ARMv7 with a Fedora
ARMv7 kernel, not a aarch64 kernel.
Again, this doesn't answer the question and ignores the most obvious
and common case, where we have ARMv8 hardware (e.g. X-Gene) that fully
supports the ARMv7 instruction set for backward compatibility, running an
aarch64 kernel with 4KB memory pages (the sensible size) and the
backward-compatibility option enabled (there is no detriment to doing so).

You can then run an armv7hl (or armv5tel) userspace in a chroot with
no ill effects. I run a setup like this where I run hard-float and
soft-float 32-bit userspace docker containers even though the host
userspace and kernels are aarch64. I see no sane reason why this
would not be a supported configuration, since the usefulness of it
seems very obvious.
Post by Peter Robinson
ARM multilib is something we explicitly decided not to support when we
were dealing with that. Multilib is a mess on x86, it's not a mess we
need on ARM.
We aren't talking about multilib here; we are talking about
chroots/containers, which are a very separate use case.

But since you mentioned it, if we don't have multilib because it's
a mess, why is it that aarch64 Fedora still uses lib64 rather than
lib directories? I don't see how this is less of a mess than what
we have on x86 at all. It seems we have just as big a mess but
without the feature benefits.

Gordan
Peter Robinson
2016-04-27 13:45:52 UTC
Post by Gordan Bobic
Post by Peter Robinson
Post by Chanho Park
Post by Peter Robinson
Post by Chanho Park
I want to use the armhf fedora rootfs on the aarch64 bit kernel.
You can't, it's not a use case we support.
Why not? All arm binaries can be runnable on aarch32 mode of aarch64 kernel.
Not exactly actually, it's possible to have aarch64/ARMv8 CPUs that
don't have the 32 bit components.
That this is not the case here, though. So the answer doesn't seem
to answer the question.
Actually it does answer the question. The question is about "32 bit
binaries on aarch64" and there are cases when they can't run.
Post by Gordan Bobic
Post by Peter Robinson
Post by Chanho Park
Post by Peter Robinson
Post by Chanho Park
The question is 'how can I run 'dnf' command on armhf fedora with
aarch64 kernel?'
No, the ARMv7 and aarch64 ABI aren't compatible, the only way we
support ARMv7 on aarch64 is via virtualisation. We will not be
supporting this or a "multilib" usecase.
The aarch64 kernel can execute both aarch64 and aarch32(fully compatible
armv7) binaries. For example, the kernel of raspberry pi 3 is aarch64 and
fedora arm version can't run on rpi3. Even all binaries can run on it but
only dnf command can't do that.
Actually that isn't entirely true. The kernel that's currently shipped
in raspbian for RPi3 is actually an ARMv7 kernel where the firmware
boots the ARM cores as v7 cores. The kernel code that's running there
is ARMv7 code not cortex-a53 code paths. That is a fairly special
usecase and you can actually do that on Fedora ARMv7 with a Fedora
ARMv7 kernel, not a aarch64 kernel.
Again, this doesn't answer the question and ignored the most obvious
and common case, where we have ARMv8 hardware (e.g. X-Gene) that fully
supports ARMv7 instruction set for backward compatibility, running an
aarch64 kernel with 4KB memory pages (the sensible size) and backward
compatibility option enabled (no detriment to doing so).
What about Seattle, which explicitly needs a kernel with 64K pages?
Post by Gordan Bobic
You can then run an armv7hl (or armv5tel) userspace in a chroot with
no ill effects. I run a setup like this where I run hard-float and
soft-float 32-bit userspace docker containers even though the host
userspace and kernels are aarch64. I see no sane reason why this
would not be a supported configuration, since the usefulness of it
seems very obvious.
Sure, but we made a decision some time ago that our kernels would use 64K
pages on aarch64. We need to make decisions that are not easy to change,
to stay compatible moving forward for a new architecture that is still
evolving. Sure, right at THIS VERY MOMENT the hardware that YOU'RE using
might support that configuration, but there is HW available right now
that needs a configuration you don't currently have, and there could well
be HW soon (I don't know, and if I did it's very likely I couldn't comment
anyway) that doesn't have the optional bits needed for aarch32.

So while it's easy to say "it works for me", and sure, you can hack things
however you like and comment from the sidelines, we need to make decisions
such as page sizes that we'll have to support for years to come, based on
the information we have at the time. Other distros have made similar
decisions, others haven't; those are their choices.

Either way, if you need to build a custom kernel for an ARMv7 userspace,
that's up to you and that use case is fine; it's your decision to make.
Post by Gordan Bobic
Post by Peter Robinson
ARM multilib is something we explicitly decided not to support when we
were dealing with that. Multilib is a mess on x86, it's not a mess we
need on ARM.
We aren't talking about multilib here, we are talking about
chroots/containers which are a very separate use case.
It might be a separate use case, but that doesn't make it any less
relevant to the reasons why we don't support it.
Post by Gordan Bobic
But since you mentioned it, if we don't have multilib because it's
a mess, why is it that aarch64 Fedora still uses lib64 rather than
lib directories? I don't see how this is less of a mess than what
Because lib64 is the directory that all the 64-bit architectures
Fedora currently supports use. It's about consistency rather than
multilib.
Gordan Bobic
2016-04-27 14:33:49 UTC
Post by Peter Robinson
Post by Gordan Bobic
Post by Peter Robinson
Post by Chanho Park
Post by Peter Robinson
Post by Chanho Park
I want to use the armhf fedora rootfs on the aarch64 bit kernel.
You can't, it's not a use case we support.
Why not? All arm binaries can be runnable on aarch32 mode of aarch64 kernel.
Not exactly actually, it's possible to have aarch64/ARMv8 CPUs that
don't have the 32 bit components.
That this is not the case here, though. So the answer doesn't seem
to answer the question.
Actually it does answer the question. The question is about "32 bit
binaries on aarch64" and there are cases when they can't run.
And that justifies not caring about all the other cases?
Post by Peter Robinson
Post by Gordan Bobic
Post by Peter Robinson
Post by Chanho Park
Post by Peter Robinson
Post by Chanho Park
The question is 'how can I run 'dnf' command on armhf fedora with
aarch64 kernel?'
No, the ARMv7 and aarch64 ABI aren't compatible, the only way we
support ARMv7 on aarch64 is via virtualisation. We will not be
supporting this or a "multilib" usecase.
The aarch64 kernel can execute both aarch64 and aarch32(fully compatible
armv7) binaries. For example, the kernel of raspberry pi 3 is aarch64 and
fedora arm version can't run on rpi3. Even all binaries can run on it but
only dnf command can't do that.
Actually that isn't entirely true. The kernel that's currently shipped
in raspbian for RPi3 is actually an ARMv7 kernel where the firmware
boots the ARM cores as v7 cores. The kernel code that's running there
is ARMv7 code not cortex-a53 code paths. That is a fairly special
usecase and you can actually do that on Fedora ARMv7 with a Fedora
ARMv7 kernel, not a aarch64 kernel.
Again, this doesn't answer the question and ignored the most obvious
and common case, where we have ARMv8 hardware (e.g. X-Gene) that fully
supports ARMv7 instruction set for backward compatibility, running an
aarch64 kernel with 4KB memory pages (the sensible size) and backward
compatibility option enabled (no detriment to doing so).
What about Seattle that explicitly needs a kernel with 64K pages?
Does one broken implementation justify a decision detrimental to
all other uses? Linus himself has had some choice words over time
about defaulting to large pages:

http://yarchive.net/comp/linux/page_sizes.html

Are there some specific new developments that fundamentally
deprecate those words of wisdom?

Additionally, are you saying that 64KB page size support
is mandatory on ARMv8 but 4KB is not? I cannot seem to find
any documentation from ARM stating this explicitly; the closest
I can find is that either/both can be available.
Post by Peter Robinson
Post by Gordan Bobic
You can then run an armv7hl (or armv5tel) userspace in a chroot with
no ill effects. I run a setup like this where I run hard-float and
soft-float 32-bit userspace docker containers even though the host
userspace and kernels are aarch64. I see no sane reason why this
would not be a supported configuration, since the usefulness of it
seems very obvious.
Sure, but we made a decision that our kernels would be 64K pages on
aarch64 some time ago. We need to make a decision that is not easy to
change to be compatible moving forward for a new architecture that is
evolving. Sure right at THIS VERY MOMENT the hardware that YOU'RE
using might support that configuration but there is HW that is
available right now that needs a configuration that you don't
currently have and there could well be HW soon (I don't know, and if I
did it's very likely I couldn't comment anyway) that doesn't have the
optional bits needed for aarch32.
And what is the advantage of not supporting it while and where
it is available?
Post by Peter Robinson
So while it's easy to say "it works for me" sure, you can hack things
how ever you like and sit back from the edges commenting, we need to
make a decisions such as page sizes that we'll need to support for
years to come based on the information we have at the time. Other
distros have made similar decisions, others haven't, that's their
choices.
Decisions to support for years on a distro with an estimated 13
month shelf life?
Post by Peter Robinson
Either way if you need to build a custom kernel for a ARMv7 userspace
it's up to you and that use case is fine, it's your decision to make.
Post by Gordan Bobic
Post by Peter Robinson
ARM multilib is something we explicitly decided not to support when we
were dealing with that. Multilib is a mess on x86, it's not a mess we
need on ARM.
We aren't talking about multilib here, we are talking about
chroots/containers which are a very separate use case.
It might be a separate usecase but it doesn't make it less relevant
for reasons why we don't support the usecase.
If it is because ARM32 support is optional in ARMv8, then why
bring up multilib at all?
Post by Peter Robinson
Post by Gordan Bobic
But since you mentioned it, if we don't have multilib because it's
a mess, why is it that aarch64 Fedora still uses lib64 rather than
lib directories? I don't see how this is less of a mess than what
Because lib64 is a directory that all current 64 bit architectures
that Fedora supports uses. It's about consistency rather than
multilib.
How is dnf seemingly being the only thing that breaks in this
case, while all the other binaries work fine, in line with the
"consistency" argument?

Gordan
Chanho Park
2016-04-27 13:31:09 UTC
Hi,
Post by Chanho Park
Post by Chanho Park
Hi Peter,
Post by Peter Robinson
Post by Chanho Park
Hi all,
I want to use the armhf fedora rootfs on the aarch64 bit kernel.
You can't, it's not a use case we support.
Why not? All arm binaries can be runnable on aarch32 mode of aarch64
kernel.
Not exactly actually, it's possible to have aarch64/ARMv8 CPUs that
don't have the 32 bit components.
Post by Chanho Park
Post by Peter Robinson
Post by Chanho Park
The question is 'how can I run 'dnf' command on armhf fedora with
aarch64 kernel?'
No, the ARMv7 and aarch64 ABI aren't compatible, the only way we
support ARMv7 on aarch64 is via virtualisation. We will not be
supporting this or a "multilib" usecase.
The aarch64 kernel can execute both aarch64 and aarch32(fully compatible
armv7) binaries. For example, the kernel of raspberry pi 3 is aarch64 and
fedora arm version can't run on rpi3. Even all binaries can run on it but
only dnf command can't do that.
Actually that isn't entirely true. The kernel that's currently shipped
in raspbian for RPi3 is actually an ARMv7 kernel where the firmware
boots the ARM cores as v7 cores. The kernel code that's running there
is ARMv7 code not cortex-a53 code paths. That is a fairly special
usecase and you can actually do that on Fedora ARMv7 with a Fedora
ARMv7 kernel, not a aarch64 kernel.
Ah, sorry, it's not a good example. Actually, the rpi3 folks tried to
enable an aarch64 kernel but couldn't do that for lack of time
(https://www.linux.com/news/raspberry-pi-3-still-essentially-32-bit-sbc-now).
The Pine64 is also an available cheap board whose kernel is an aarch64 version.
Post by Chanho Park
ARM multilib is something we explicitly decided not to support when we
were dealing with that. Multilib is a mess on x86, it's not a mess we
need on ARM.
No, it's not multilib. I want to run _only_ armv7 binaries and libraries. I
want to know why only 'dnf' cannot do that.

Best Regards,
Chanho Park
Dan Horák
2016-04-27 13:47:45 UTC
On Wed, 27 Apr 2016 22:31:09 +0900
Post by Chanho Park
Hi,
Post by Chanho Park
Post by Chanho Park
Hi Peter,
Post by Peter Robinson
Post by Chanho Park
Hi all,
I want to use the armhf fedora rootfs on the aarch64 bit kernel.
You can't, it's not a use case we support.
Why not? All arm binaries can be runnable on aarch32 mode of aarch64
kernel.
Not exactly actually, it's possible to have aarch64/ARMv8 CPUs that
don't have the 32 bit components.
Post by Chanho Park
Post by Peter Robinson
Post by Chanho Park
The question is 'how can I run 'dnf' command on armhf fedora with
aarch64 kernel?'
No, the ARMv7 and aarch64 ABI aren't compatible, the only way we
support ARMv7 on aarch64 is via virtualisation. We will not be
supporting this or a "multilib" usecase.
The aarch64 kernel can execute both aarch64 and aarch32(fully compatible
armv7) binaries. For example, the kernel of raspberry pi 3 is aarch64 and
fedora arm version can't run on rpi3. Even all binaries can run on it but
only dnf command can't do that.
Actually that isn't entirely true. The kernel that's currently shipped
in raspbian for RPi3 is actually an ARMv7 kernel where the firmware
boots the ARM cores as v7 cores. The kernel code that's running there
is ARMv7 code not cortex-a53 code paths. That is a fairly special
usecase and you can actually do that on Fedora ARMv7 with a Fedora
ARMv7 kernel, not a aarch64 kernel.
Ah. Sorry. It's not good example. Actually, rpi3 try to enable aarch64
kernel but they can't di that lack of time. (
https://www.linux.com/news/raspberry-pi-3-still-essentially-32-bit-sbc-now)
Pine64 is also available cheap board which kernel is aarch64 version.
Post by Chanho Park
ARM multilib is something we explicitly decided not to support when we
were dealing with that. Multilib is a mess on x86, it's not a mess we
need on ARM.
No. It's not multilib. I want to run _only_ armv7 binaries and libraries. I
want to know why only 'dnf' is impossible do that.
Leaving the Fedora design question aside, it's because dnf does not
consider aarch64 and armhfp to be compatible arches. That's something the
dnf project could eventually solve.

If I understand the use case, it's the same as running a full i686
userspace on an x86_64 kernel (or s390 on an s390x kernel). I think there
were such attempts.
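
A client-side stop-gap (untested here; the repo file names are just the
stock Fedora ones) is to stop the repo definitions from expanding
$basearch to aarch64 by pinning the arch in the URLs:

sed -i 's/arch=\$basearch/arch=armhfp/' /etc/yum.repos.d/fedora*.repo

That only addresses the repository metadata side, though; dnf's own
arch-compatibility check may still refuse armv7hl packages, which is the
part the dnf project would have to solve.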


Dan
Peter Robinson
2016-04-27 13:50:16 UTC
Post by Chanho Park
Post by Peter Robinson
Post by Chanho Park
Post by Peter Robinson
Post by Chanho Park
Hi all,
I want to use the armhf fedora rootfs on the aarch64 bit kernel.
You can't, it's not a use case we support.
Why not? All arm binaries can be runnable on aarch32 mode of aarch64 kernel.
Not exactly actually, it's possible to have aarch64/ARMv8 CPUs that
don't have the 32 bit components.
Post by Chanho Park
Post by Peter Robinson
Post by Chanho Park
The question is 'how can I run 'dnf' command on armhf fedora with
aarch64 kernel?'
No, the ARMv7 and aarch64 ABI aren't compatible, the only way we
support ARMv7 on aarch64 is via virtualisation. We will not be
supporting this or a "multilib" usecase.
The aarch64 kernel can execute both aarch64 and aarch32(fully compatible
armv7) binaries. For example, the kernel of raspberry pi 3 is aarch64 and
fedora arm version can't run on rpi3. Even all binaries can run on it but
only dnf command can't do that.
Actually that isn't entirely true. The kernel that's currently shipped
in raspbian for RPi3 is actually an ARMv7 kernel where the firmware
boots the ARM cores as v7 cores. The kernel code that's running there
is ARMv7 code not cortex-a53 code paths. That is a fairly special
usecase and you can actually do that on Fedora ARMv7 with a Fedora
ARMv7 kernel, not a aarch64 kernel.
Ah. Sorry. It's not good example. Actually, rpi3 try to enable aarch64
kernel but they can't di that lack of time.
(https://www.linux.com/news/raspberry-pi-3-still-essentially-32-bit-sbc-now)
Pine64 is also available cheap board which kernel is aarch64 version.
Post by Peter Robinson
ARM multilib is something we explicitly decided not to support when we
were dealing with that. Multilib is a mess on x86, it's not a mess we
need on ARM.
No. It's not multilib. I want to run _only_ armv7 binaries and libraries. I
want to know why only 'dnf' is impossible do that.
Because we don't support it. For example, the aarch64 kernels have a
64K page size while ARMv7 uses 4K, so you'd need a custom kernel before
you even start, and at that point you may as well do a custom image.
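
For anyone checking their own setup, the page size a kernel was built with
is easy to query from userspace:

getconf PAGESIZE    # prints 4096 on a 4K-page kernel, 65536 on a 64K one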
Dennis Gilmore
2016-04-27 15:38:22 UTC
Post by Peter Robinson
Post by Chanho Park
Hi all,
I want to use the armhf fedora rootfs on the aarch64 bit kernel.
You can't, it's not a use case we support.
To expand on this: you would need code changes in rpm, dnf, yum,
packagekit, mock and everything else dealing with rpm installation and
removal. None of the tooling supports what you are asking.

Some aarch64 hardware will not run 32-bit binaries at all. When we started
down the path of supporting aarch64, we made a conscious decision not to
support running armhfp or ARM 32-bit binaries in 64-bit environments. The
supported way to run 32-bit binaries is to do so in a 32-bit VM.

Dennis
Gordan Bobic
2016-04-27 15:47:55 UTC
Post by Dennis Gilmore
Post by Peter Robinson
Post by Chanho Park
Hi all,
I want to use the armhf fedora rootfs on the aarch64 bit kernel.
You can't, it's not a use case we support.
To further this piece, you would need to have code changes in rpm, dnf, yum,
packagekit, mock and everything else dealing with rpm installation and
removal. none of the tooling supports what you are asking.
That must be some very recent code. I can confirm that CentOS 7
armv7hl works just fine with just the /etc/rpm/platform configured
appropriately in the chroot on an aarch64 host (with a non-default
kernel built with 4KB pages). No dnf, granted, since that is more
recent than F19, but all the rest of it works just fine.

So unless there has been a lot of bit rot since F19, it seems
unlikely any of the rest of it would need fixing.
Post by Dennis Gilmore
Some aarch64 hardware will not run 32 bit binaries at all. when we started on
the path of supporting aarch64 we mad a concious decision not to support
running armhfp or arm 32 bit binaries on 64 bit environments. the supported
way to run 32 bit binaries is to do so in a 32 bit vm.
Unless I am missing something, even ignoring the very non-trivial
performance hit of running in a VM, if the hardware doesn't support
the 32-bit instruction set, then the VMs won't work either, so I'm
not sure what the point being made here is.

Gordan
Peter Robinson
2016-04-27 15:56:12 UTC
Post by Gordan Bobic
Post by Dennis Gilmore
Post by Peter Robinson
Post by Chanho Park
Hi all,
I want to use the armhf fedora rootfs on the aarch64 bit kernel.
You can't, it's not a use case we support.
To further this piece, you would need to have code changes in rpm, dnf, yum,
packagekit, mock and everything else dealing with rpm installation and
removal. none of the tooling supports what you are asking.
That must be some very recent code. I can confirm that CentOS 7
armv7hl works just fine with just the /etc/rpm/platform configured
appropriately in the chroot on an aarch64 host (with a non-default
kernel built with 4KB pages). No dnf, granted, since that is more
recent than F19, but all the rest of it works just fine.
Maybe that's something CentOS have added (I don't know, I haven't
looked); RHELSA doesn't support it as far as I'm aware, and it definitely
uses only the 64K page size. The biggest change is in rpm and the
arch mappings there.
Post by Gordan Bobic
So unless there has been a lot of bit rot since F19, it seems
unlikely any of the rest of it would need fixing.
Maybe. Early Fedora on aarch64 used 4K pages during bring-up, but it
became clear early on that various orgs wanted 64K pages, so the
decision was made to move.
Post by Gordan Bobic
Post by Dennis Gilmore
Some aarch64 hardware will not run 32 bit binaries at all. when we started on
the path of supporting aarch64 we mad a concious decision not to support
running armhfp or arm 32 bit binaries on 64 bit environments. the supported
way to run 32 bit binaries is to do so in a 32 bit vm.
Unless I am missing something, even ignoring the very non-trivial
performance hit of running in a VM, if the hardware doesn't support
the 32-bit instruction set, then the VMs won't work either, so I'm
not sure what the point being made here is.
Yes, because the instructions can be dealt with by the hypervisor,
whether through emulation or some other mechanism.
Gordan Bobic
2016-04-27 16:04:38 UTC
Post by Peter Robinson
Post by Gordan Bobic
Post by Dennis Gilmore
Post by Peter Robinson
Post by Chanho Park
Hi all,
I want to use the armhf fedora rootfs on the aarch64 bit kernel.
You can't, it's not a use case we support.
To further this piece, you would need to have code changes in rpm,
dnf,
yum,
packagekit, mock and everything else dealing with rpm installation and
removal. none of the tooling supports what you are asking.
That must be some very recent code. I can confirm that CentOS 7
armv7hl works just fine with just the /etc/rpm/platform configured
appropriately in the chroot on an aarch64 host (with a non-default
kernel built with 4KB pages). No dnf, granted, since that is more
recent than F19, but all the rest of it works just fine.
Maybe that's something that CentOS have added (don't know, haven't
looked), RHELSA doesn't support it that I'm aware of and they're
definitely only 64K page size. The biggest change is in rpm and the
arch mappings there.
They might not support it, but it most certainly works. There are no
changes specific to this that I can find in CentOS. All I changed was
rebuilding the host kernel with 4KB pages and ARM32 support (it is still
an aarch64 kernel). The C7 armv7hl guest is completely unmodified apart
from /etc/rpm/platform being set explicitly.
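
Concretely, the host-side change was only in the kernel configuration,
roughly along these lines (option names are from the arm64 Kconfig; exact
names and dependencies vary by kernel version):

# aarch64 host kernel .config fragment
CONFIG_ARM64_4K_PAGES=y
# CONFIG_ARM64_64K_PAGES is not set
CONFIG_COMPAT=y    # allow executing 32-bit (AArch32) binaries at EL0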

The main point is that the original assertion (that making this work
would require code changes in rpm, yum, packagekit, mock and elsewhere)
doesn't seem to be correct, based on empirical evidence.
Post by Peter Robinson
Post by Gordan Bobic
So unless there has been a lot of bit rot since F19, it seems
unlikely any of the rest of it would need fixing.
Maybe, early Fedora on aarch64 was 4K pages during bringup but it
became clear early on that various orgs wanted 64K pages so the
decision was made to move.
Despite Linus' words of wisdom to the contrary over the years. :-(
Post by Peter Robinson
Post by Gordan Bobic
Post by Dennis Gilmore
Some aarch64 hardware will not run 32 bit binaries at all. when we
started
on
the path of supporting aarch64 we mad a concious decision not to support
running armhfp or arm 32 bit binaries on 64 bit environments. the supported
way to run 32 bit binaries is to do so in a 32 bit vm.
Unless I am missing something, even ignoring the very non-trivial
performance hit of running in a VM, if the hardware doesn't support
the 32-bit instruction set, then the VMs won't work either, so I'm
not sure what the point being made here is.
Yes, because the instructions can be dealt with by the hypervisor
whether through emulation, or some other mechanism.
If it's going to run in emulation, you might as well run it on the
highest-end x86 hardware possible; it'll be slightly less
excruciatingly slow. And last I checked, that still had issues
with the availability of emulated kernels and architectures.

Gordan
John Dulaney
2016-04-27 18:12:28 UTC
Post by Gordan Bobic
Post by Peter Robinson
Maybe that's something that CentOS have added (don't know, haven't
looked), RHELSA doesn't support it that I'm aware of and they're
definitely only 64K page size. The biggest change is in rpm and the
arch mappings there.
They might not support it, but it most certainly works. There are no
changes specific to this that I can find in CentOS. All I changed was
rebuilt the host kernel with 4KB pages and ARM32 support (still an
aarch64 kernel). C7 armv7hl guest is completely unmodified apart from
the /etc/rpm/platform being set explicitly.
The main point being that the original assertion that making this
work would require rpm, yum, packagekit, mock and other code changes
doesn't seem to be correct based on empirical evidence.
It may work with rpm, but, as per the original post, dnf does not
support it, and dnf should not support it as long as Fedora
does not support a 32 bit userspace on aarch64.
Post by Gordan Bobic
Despite Linus' words of wisdom to the contrary over the years. :-(
Linus is not God, and we'd rather support as broad a range of hardware
as possible.
Post by Gordan Bobic
Post by Peter Robinson
Yes, because the instructions can be dealt with by the hypervisor
whether through emulation, or some other mechanism.
If it's going to run in emulation you might as well run it on
some highest end possible x86 hardware, it'll be slightly less
excruciatingly slow. And last I checked, that still had issues
with availability of kernels and architectures emulated.
Actually, with KVM, you get pretty much the same speed as native
aarch64 VMs. Also, server-grade aarch64 h/w will give you
pretty decent performance. I'm less sure about SBCs; they're
dependent on the SoC used.

On the whole, 32-bit ARM VMs are going to have the same
performance on aarch64 as i686 VMs on x86_64.

John.
Gordan Bobic
2016-04-27 19:39:21 UTC
Post by John Dulaney
Post by Gordan Bobic
Post by Peter Robinson
Maybe that's something that CentOS have added (don't know, haven't
looked), RHELSA doesn't support it that I'm aware of and they're
definitely only 64K page size. The biggest change is in rpm and the
arch mappings there.
They might not support it, but it most certainly works. There are no
changes specific to this that I can find in CentOS. All I changed was
rebuilt the host kernel with 4KB pages and ARM32 support (still an
aarch64 kernel). C7 armv7hl guest is completely unmodified apart from
the /etc/rpm/platform being set explicitly.
The main point being that the original assertion that making this
work would require rpm, yum, packagekit, mock and other code changes
doesn't seem to be correct based on empirical evidence.
It may work with rpm, but, as per the original post, dnf does not
support it, and dnf should not support it as long as Fedora
does not support a 32 bit userspace on aarch64.
That sounds an awful lot like trying to justify what is arguably
a bug in dnf: specifically, that unlike yum, which it is replacing,
it ignores /etc/rpm/platform.
Post by John Dulaney
Post by Gordan Bobic
Despite Linus' words of wisdom to the contrary over the years. :-(
Linus is not God, and we'd rather support as broad as possible a
range of hardware.
1) He is not God, but he is smarter and better informed than
most on the subject at hand. Can you link to an overwhelming
counter-argument from someone even remotely similarly qualified?

2) Nobody has yet pointed at ARM's own documentation (I did ask
earlier) that says that 4KB memory page support is optional
rather than mandatory.

The closest I can find on this from ARM are the following:
http://infocenter.arm.com/help/topic/com.arm.doc.den0024a/ch12s04.html
https://www.arm.com/files/downloads/ARMv8_white_paper_v5.pdf
https://www.arm.com/files/downloads/ARMv8_Architecture.pdf
and all of the above state that ARMv8 should support both 4KB
and 64KB memory pages.

And if 4KB support is in fact mandatory, then arguably the
decision to opt for 64KB for the sake of supporting Seattle was
based on wanting to support broken hardware that turned out to
be too little too late anyway.
Post by John Dulaney
Post by Gordan Bobic
Post by Peter Robinson
Yes, because the instructions can be dealt with by the hypervisor
whether through emulation, or some other mechanism.
If it's going to run in emulation you might as well run it on
some highest end possible x86 hardware, it'll be slightly less
excruciatingly slow. And last I checked, that still had issues
with availability of kernels and architectures emulated.
Actually, with kvm, you get pretty much the same speed as native
aarch64 vms.
Running armv7hl VMs on ARMv8 hardware without ARM32 support?

(Ignoring for the moment the fact that, even when the guest and host
architectures are the same, running in a VM comes with a much greater
impact than "pretty much the same speed as native".)
Post by John Dulaney
Also, server grade aarch64 h/w will give you
pretty decent performance. I'm less sure about SBCs; they're
dependent on the SoC used.
I am quite aware. I am very pleased with my Gigabyte MP30-AR0
with an X-Gene and 128GB of RAM. But that I can buy off the
shelf right now, whereas the standard-form-factor Seattles that
were promised a couple of years ago are still nowhere to be
seen (which is what I meant by "too little too late" above).
Post by John Dulaney
On the whole, 32 bit arm vms are going to have the same
performance on aarch64 as i686 on x86_64.
Are you saying that, on hardware lacking the optional ARM32
support, a VM running ARM32 binaries is going to run with
approximately the same performance as if it were running an
aarch64 OS? If so, how? I am specifically interested because
part of this debate hinges on the statement that 32-bit
guests are only supported in VMs rather than chroots because
on ARMv8 32-bit support is optional, but it is not at all
clear or demonstrated that running a 32-bit ARM guest on ARMv8
without 32-bit native support will not suffer the same hit as
running it in QEMU emulation on any other CPU architecture.

So either something magical happens that means that the
missing 32-bit support doesn't have to be fully emulated in
software, or the entire argument being made for VMs instead
of chroots is entirely erroneous.

Gordan
Jon Masters
2016-04-28 18:49:06 UTC
Hi Gordan, Peter, all,
Post by Gordan Bobic
Post by John Dulaney
Post by Gordan Bobic
Post by Peter Robinson
Maybe that's something that CentOS have added (don't know, haven't
looked), RHELSA doesn't support it that I'm aware of and they're
definitely only 64K page size. The biggest change is in rpm and the
arch mappings there.
They might not support it, but it most certainly works. There are no
changes specific to this that I can find in CentOS. All I changed was
rebuilt the host kernel with 4KB pages and ARM32 support (still an
aarch64 kernel). C7 armv7hl guest is completely unmodified apart from
the /etc/rpm/platform being set explicitly.
Allow me to add a few thoughts. I have been working with the ARM vendors
(as well as the ARM Architecture Group) since before the architecture
was announced, and the issue of page size and 32-bit backward
compatibility came up in the earliest days. I am speaking from a Red Hat
perspective and NOT dictating what Fedora should or must do, but I do
strongly encourage Fedora not to make a change to something like the
page size simply to support a (relatively) small number of corner cases.
It is better to focus on the longer-term trajectory, which the mobile
handset market demonstrates: the transition to 64-bit computing hardware
will be much faster than people thought, and we don't need to build a
legacy (we don't have a 32-bit app store filled with things that can't
easily be rebuilt, and all of them have been rebuilt anyway).

That doesn't mean we shouldn't love 32-bit ARM devices, which we do. In
fact, there will be many more 32-bit ARM devices over coming years. This
is especially true for IoT clients. But there will also be a large (and
rapidly growing) number of very high performance 64-bit systems. Many of
those will not have any 32-bit backward compatibility, or will disable
it in the interest of reducing the amount of validation work. Keeping
several entirely separate ISAs around just for the fairly nonexistent field
of proprietary, non-recompilable third-party 32-bit apps doesn't really make
sense. Sure, running 32-bit via multilib is fun and all, but it's not
really something that is critical to using ARM systems.

The mandatory page sizes in the v8 architecture are 4K and 64K, with
various options around the number of bits used for address spaces, huge
pages (or ginormous pages), and contiguous hinting for smaller "huge"
pages. There is an option for 16K pages, but it is not mandatory. In the
server specifications, we don't compel Operating Systems to use 64K, but
everything is written with that explicitly in mind. By using 64K early
we ensure that it is possible to do so in a very clean way, and then if
(over the coming years) the deployment of sufficient real systems proves
that this was a premature decision, we still have 4K.

The choices for preferred page size were between 4K and 64K. In the
interest of transparency, I pushed from the RH side in the earliest days
(before public disclosure) to introduce an intentional break with the
past and support only 64K on ARMv8. I also asked a few of the chip
vendors not to implement 32-bit execution (and some of them have indeed
omitted it after we discussed the needs early on), and am aggressively
pushing for it to go away over time in all server parts. But there's
more to it than that. In the (very) many early conversations with
various performance folks, the feedback was that larger page sizes than
4K should generally be adopted for a new arch. Ideally that would have
been 16K (which other architectures than x86 went with also), but that
was optional. Optional necessarily means "does not exist". My advice
when Red Hat began internal work on ARMv8 was to listen to the experts.

I am well aware of Linus's views on the topic and I have seen the rants
on G+ and elsewhere. I am completely willing to be wrong (there is not
enough data yet) over moving to 64K too soon and ultimately if it was
premature see things like RHELSA on the Red Hat side switch back to 4K.
Fedora is its own master, but I strongly encourage retaining the use of
64K granules at this time, and letting it play out without responding to
one or two corner use cases and changing course. There are very many
design optimizations that can be done when you have a 64K page size,
from the way one can optimize cache lookups and hardware page table
walker caches to the reduction of TLB pressure (though I accept that
huge pages are an answer for this under a 4K granule regime as well). It
would be nice to blaze a trail rather than take the safe default.

My own opinion is that (in the longer term, beginning with server) we
should not have a 32-bit legacy of the kind that x86 has to deal with
forever. We can use virtualization (and later, if it really comes to it,
containers running 32-bit applications with 4K pages exposed to them -
an implementation would be a bit like "Clear" containers today) to run
32-bit applications on 64-bit without having to do nasty hacks (such as
multilib) and reduce any potential for confusion on the part of users
(see also RasPi 3 as an example). It is still early enough in the
evolution of general purpose aarch64 to try this, and have the pragmatic
fallback of retreating to 4K if needed. The same approach of running
under virtualization or within a container model equally applies to
ILP32, which is another 32-bit ABI that some folks like, in that a third
party group is welcome to do all of the lifting required.
Post by Gordan Bobic
Post by John Dulaney
Post by Gordan Bobic
The main point being that the original assertion that making this
work would require rpm, yum, packagekit, mock and other code changes
doesn't seem to be correct based on empirical evidence.
It may work with rpm, but, as per the original post, dnf does not
support it, and dnf should not support it as long as Fedora
does not support a 32 bit userspace on aarch64.
It's a lot of lifting to support validating a 32-bit userspace for a
brand new architecture that doesn't need to have that legacy. Sure, it's
convenient, and you're obviously more than capable of building a kernel
with a 4K page size and doing whatever you need for yourself. That's the
beauty of open source. It lets you have a 32-bit userspace on a 64-bit
device without needing to support that for everyone else.
Post by Gordan Bobic
2) Nobody has yet pointed at ARM's own documentation (I did ask
earlier) that says that 4KB memory page support is optional
rather than mandatory.
Nobody said this was a requirement. I believe you raised this as some
kind of logical fallacy to reinforce the position that you have taken.
Post by Gordan Bobic
And if 4KB support is in fact mandatory, then arguably the
decision to opt for 64KB for the sake of supporting Seattle was
based on wanting to support broken hardware that turned out to
be too little too late anyway.
Seattle was incredibly well designed by a very talented team of
engineers at AMD, who know how to make servers. They did everything
fully in conformance with the specifications we coauthored for v8. It is
true that everyone would have liked to see low cost mass market Seattle
hardware in wide distribution. For the record, last week, I received one
of the preproduction "Cello" boards ($300) for which a few kinks are
being resolved before it will go into mass production soon.

<snip>
Post by Gordan Bobic
So either something magical happens that means that the
missing 32-bit support doesn't have to be fully emulated in
software, or the entire argument being made for VMs instead
of chroots is entirely erroneous.
Nobody said there wasn't a performance hit using virtualization.
Depending upon how you measure it, it's about 3-10% overhead or somesuch
to use KVM (or Xen for that matter) on ARMv8. That doesn't make it an
erroneous argument that running a VM is an easier exercise in
distribution validation and support: you build one 64-bit distro, you
build one 32-bit distro. You don't have to support a mixture. In a few
years, we'll all be using 64-bit ARM SoCs in every $10 device, only
running native 64-bit ARMv8 code, and wondering why it was ever an
issue that we might want multilib. We'll have $1-$2 IoT widgets that are
32-bit, but that's another matter. There's no legacy today, so let's
concentrate on not building one and learning from history.

Jon.
--
Computer Architect | Sent from my Fedora powered laptop
Gordan Bobic
2016-04-28 21:00:02 UTC
Post by Jon Masters
Hi Gordan, Peter, all,
Post by Gordan Bobic
Post by John Dulaney
Post by Gordan Bobic
Post by Peter Robinson
Maybe that's something that CentOS have added (don't know, haven't
looked), RHELSA doesn't support it that I'm aware of and they're
definitely only 64K page size. The biggest change is in rpm and the
arch mappings there.
They might not support it, but it most certainly works. There are no
changes specific to this that I can find in CentOS. All I changed was
rebuilt the host kernel with 4KB pages and ARM32 support (still an
aarch64 kernel). C7 armv7hl guest is completely unmodified apart from
the /etc/rpm/platform being set explicitly.
First of all, Jon, thank you for your thoughts on this matter.
Post by Jon Masters
Allow me to add a few thoughts. I have been working with the ARM vendors
(as well as the ARM Architecture Group) since before the architecture
was announced, and the issue of page size and 32-bit backward
compatibility came up in the earliest days. I am speaking from a Red Hat
perspective and NOT dictating what Fedora should or must do, but I do
strongly encourage Fedora not to make a change to something like the
page size simply to support a (relatively) small number of corner cases.
IMO, the issue of backward compatibility is completely secondary to
the issue of memory fragmentation/occupancy efficiency when it comes
to 64KB pages. And that isn't a corner case; it is overwhelmingly the
primary case.
Post by Jon Masters
It is better to focus on the longer term trajectory, which the mobile
handset market demonstrates: the transition to 64-bit computing hardware
will be much faster than people thought, and we don't need to build a
legacy (we don't a 32-bit app store filled with things that can't easily
be rebuilt, and all of them have been anyway).
I think going off on a tangent about mobile devices needlessly
muddies the water here. 64-bitness is completely independent of memory
page size and of the pros and cons of different sizes. If anything, on
mobile devices where memory is scarcer, smaller pages will result in
lower fragmentation and less wasted memory.
Post by Jon Masters
That doesn't mean we shouldn't love 32-bit ARM devices, which we do. In
fact, there will be many more 32-bit ARM devices over coming years. This
is especially true for IoT clients. But there will also be a large (and
rapidly growing) number of very high performance 64-bit systems. Many of
those will not have any 32-bit backward compatibility, or will disable
it in the interest of reducing the amount of validation work. Having an
entire separate several ISAs just for the fairly nonexistent field of
proprietary non-recompilable third party 32-bit apps doesn't really make
sense. Sure, running 32-bit via multilib is fun and all, but it's not
really something that is critical to using ARM systems.
Except where there's no choice, such as closed-source applications
(Plex comes to mind) or libraries without an appropriate ARM64
implementation, such as Mono. I'm sure pure aarch64 will be supported
by all of them at some point, but the problem is real today.

But OK, for the sake of this discussion let's completely ignore the
32-bit support to simplify things.
Post by Jon Masters
The mandatory page sizes in the v8 architecture are 4K and 64K, with
various options around the number of bits used for address spaces, huge
pages (or ginormous pages), and contiguous hinting for smaller "huge"
pages. There is an option for 16K pages, but it is not mandatory. In the
server specifications, we don't compel Operating Systems to use 64K, but
everything is written with that explicitly in mind. By using 64K early
we ensure that it is possible to do so in a very clean way, and then if
(over the coming years) the deployment of sufficient real systems proves
that this was a premature decision, we still have 4K.
The real question is how much code will bit-rot due to not being
tested with 4KB pages, and how difficult it will be to subsequently
push through patches all the way from upstream projects down to the
level of the distros we are all fans of here. And even then, the
consequence will be software that is broken for anyone who has a
need to do anything but the straight-and-narrow case that the
distro maintainers envisaged.
Post by Jon Masters
The choices for preferred page size were between 4K and 64K. In the
interest of transparency, I pushed from the RH side in the earliest days
(before public disclosure) to introduce an intentional break with the
past and support only 64K on ARMv8.
Breaking with the past is all well and good, but I am particularly
interested in the technical reasons for doing so. What benefits exceed
the drawbacks of the significantly increased fragmentation in the
general case (apart from databases, for which we have huge pages
regardless)?
Post by Jon Masters
I also asked a few of the chip
vendors not to implement 32-bit execution (and some of them have indeed
omitted it after we discussed the needs early on), and am aggressively
pushing for it to go away over time in all server parts. But there's
more to it than that. In the (very) many early conversations with
various performance folks, the feedback was that larger page sizes than
4K should generally be adopted for a new arch. Ideally that would have
been 16K (which other architectures than x86 went with also), but that
was optional. Optionally necessarily means "does not exist". My advice
when Red Hat began internal work on ARMv8 was to listen to the experts.
Linus is not an expert?
Post by Jon Masters
I am well aware of Linus's views on the topic and I have seen the rants
on G+ and elsewhere. I am completely willing to be wrong (there is not
enough data yet) over moving to 64K too soon and ultimately if it was
premature see things like RHELSA on the Red Hat side switch back to 4K.
My main concern is around how much code elsewhere will rot and need
attention should this ever happen.
Post by Jon Masters
Fedora is its own master, but I strongly encourage retaining the use of
64K granules at this time, and letting it play out without responding to
one or two corner use cases and changing course. There are very many
design optimizations that can be done when you have a 64K page size,
from the way one can optimize cache lookups and hardware page table
walker caches to the reduction of TLB pressure (though I accept that
huge pages are an answer for this under a 4K granule regime as well). It
would be nice to blaze a trail rather than take the safe default.
While I agree with the sentiment, I think something like this is
better decided on carefully considered merit assessed through
empirical measurement.
Post by Jon Masters
My own opinion is that (in the longer term, beginning with server) we
should not have a 32-bit legacy of the kind that x86 has to deal with
forever. We can use virtualization (and later, if it really comes to it,
containers running 32-bit applications with 4K pages exposed to them -
an implementation would be a bit like "Clear" containers today) to run
32-bit applications on 64-bit without having to do nasty hacks (such as
multilib) and reduce any potential for confusion on the part of users
(see also RasPi 3 as an example). It is still early enough in the
evolution of general purpose aarch64 to try this, and have the
pragmatic
fallback of retreating to 4K if needed. The same approach of running
under virtualization or within a container model equally applies to
ILP32, which is another 32-bit ABI that some folks like, in that a third
party group is welcome to do all of the lifting required.
This again mashes 32-bit support with page size. If there is no
32-bit support in the CPU, I am reasonably confident that QEMU
emulation of it will be unusably slow for just about any serious
use case (you might as well run QEMU emulation of ARM32 on x86
in that case and not even touch upon aarch64).
Post by Jon Masters
Post by Gordan Bobic
Post by John Dulaney
Post by Gordan Bobic
The main point being that the original assertion that making this
work would require rpm, yum, packagekit, mock and other code changes
doesn't seem to be correct based on empirical evidence.
It may work with rpm, but, as per the original post, dnf does not
support it, and dnf should not support it as long as Fedora
does not support a 32 bit userspace on aarch64.
It's a lot of lifting to support validating a 32-bit userspace for a
brand new architecture that doesn't need to have that legacy. Sure, it's
convenient, and you're obviously more than capable of building a kernel
with a 4K page size and doing whatever you need for yourself. That's the
beauty of open source. It lets you have a 32-bit userspace on a 64-bit
device without needing to support that for everyone else.
Sure, that is the beauty of open source. But will Fedora accept
patches for fixing things that break during such independent
validation? My experience with Fedora patch submissions has
been very poor in the past - the typical outcome being that the
bug will sit and rot in bugzilla until the distro goes EOL and
the bug zapper closes it. That is hugely demotivating.
Post by Jon Masters
Post by Gordan Bobic
2) Nobody has yet pointed at ARM's own documentation (I did ask
earlier) that says that 4KB memory page support is optional
rather than mandatory.
Nobody said this was a requirement. I believe you raised this as some
kind of logical fallacy to reinforce the position that you have taken.
I'm afraid you got that backwards. I believe it was Peter who
said that Seattle didn't support 4KB pages, or at least seemingly implied it.
Post by Jon Masters
Post by Gordan Bobic
And if 4KB support is in fact mandatory, then arguably the
decision to opt for 64KB for the sake of supporting Seattle was
based on wanting to support broken hardware that turned out to
be too little too late anyway.
Seattle was incredibly well designed by a very talented team of
engineers at AMD, who know how to make servers. They did everything
fully in conformance with the specifications we coauthored for v8. It is
true that everyone would have liked to see low cost mass market Seattle
hardware in wide distribution. For the record, last week, I received one
of the preproduction "Cello" boards ($300) for which a few kinks are
being resolved; it should go into mass production soon.
If Seattle does in fact support the spec-mandated 4KB memory
pages, then that specific SoC is no longer relevant to this
thread.
Post by Jon Masters
Post by Gordan Bobic
So either something magical happens that means that the
missing 32-bit support doesn't have to be fully emulated in
software, or the entire argument being made for VMs instead
of chroots is entirely erroneous.
Nobody said there wasn't a performance hit using virtualization.
Depending upon how you measure it, it's about 3-10% overhead or somesuch
to use KVM (or Xen for that matter) on ARMv8. That doesn't make it an
erroneous argument that running a VM is an easier exercise in
distribution validation and support: you build one 64-bit distro, you
build one 32-bit distro. You don't have to support a mixture. In a few
years, we'll all be using 64-bit ARM SoCs in every $10 device, only
running native 64-bit ARMv8 code, and wondering why it was ever an
issue that we might want multilib. We'll have $1-$2 IoT widgets that are
32-bit, but that's another matter. There's no legacy today, so let's
concentrate on not building one and learning from history.
I am talking about the specific case of using
armv7hl (or armv5tel) VMs on aarch64 hardware that doesn't
implement 32-bit ARM support (and you suggest above that not
supporting ARM32 on ARM64 hardware may be a good thing). But that
also means that there is no advantage to running an armv7hl distro
on aarch64 hardware without legacy support, so the whole VM notion
is out of scope since it isn't virtualization, it is emulation. And
at that point there is no advantage to running an emulator on an
aarch64 machine over an x86-64 machine.

The point being that if there's no legacy 32-bit support in
hardware, it's not going to be workable anyway. If there is
legacy 32-bit support in hardware, running it in a chroot
or in a docker container might not be outright supported (I
get it, there are only so many maintainers and testers) but
at the very least external, user-provided validation, patches,
questions, and bug reports should be treated with something
other than contempt.

Gordan
Jon Masters
2016-04-28 21:26:59 UTC
Permalink
Hi Gordan,
Post by Gordan Bobic
First of all, Jon, thank you for your thoughts on this matter.
No problem :)
Post by Gordan Bobic
Post by Jon Masters
Allow me to add a few thoughts. I have been working with the ARM vendors
(as well as the ARM Architecture Group) since before the architecture
was announced, and the issue of page size and 32-bit backward
compatibility came up in the earliest days. I am speaking from a Red Hat
perspective and NOT dictating what Fedora should or must do, but I do
strongly encourage Fedora not to make a change to something like the
page size simply to support a (relatively) small number of corner cases.
IMO, the issue of backward compatibility is completely secondary to
the issue of efficiency of memory fragmentation/occupancy when it comes
to 64KB pages. And that isn't a corner case, it is the overwhelmingly
primary case.
Let's keep to the memory discussion then, I agree. On the fragmentation
argument, I do agree this is an area where server/non-server uses
certainly clash. It might well be that we later decide in Fedora that 4K
is the right size once there are more 64-bit client devices.
Post by Gordan Bobic
Post by Jon Masters
Having several entirely separate ISAs just for the fairly nonexistent field of
proprietary non-recompilable third party 32-bit apps doesn't really make
sense. Sure, running 32-bit via multilib is fun and all, but it's not
really something that is critical to using ARM systems.
Except where there's no choice, such as closed source applications
(Plex comes to mind) or libraries without appropriate ARM64 implementation
such as Mono. I'm sure pure aarch64 will be supported by it all at
some point, but the problem is real today.
It's definitely true that there are some applications that aren't yet
ported to ARMv8, though that list is fairly small (compared with IA32).
Post by Gordan Bobic
But OK, for the sake of this discussion let's completely ignore the
32-bit support to simplify things.
OK :)
Post by Gordan Bobic
Post by Jon Masters
The mandatory page sizes in the v8 architecture are 4K and 64K, with
various options around the number of bits used for address spaces, huge
pages (or ginormous pages), and contiguous hinting for smaller "huge"
pages. There is an option for 16K pages, but it is not mandatory. In the
server specifications, we don't compel Operating Systems to use 64K, but
everything is written with that explicitly in mind. By using 64K early
we ensure that it is possible to do so in a very clean way, and then if
(over the coming years) the deployment of sufficient real systems proves
that this was a premature decision, we still have 4K.
The real question is how much code will bit-rot due to not being
tested with 4KB pages.
With respect, I think it's the other way around. We have another whole
architecture targeting 4K pages by default, and (regretfully perhaps,
though that's a personal opinion) it's a pretty popular choice that many
people are using in Fedora today. So I don't see any situation in which
4K bitrots over 64K. I did see the opposite being very likely if we
didn't start out with 64K as the baseline going in on day one.
Post by Gordan Bobic
Post by Jon Masters
I also asked a few of the chip
vendors not to implement 32-bit execution (and some of them have indeed
omitted it after we discussed the needs early on), and am aggressively
pushing for it to go away over time in all server parts. But there's
more to it than that. In the (very) many early conversations with
various performance folks, the feedback was that larger page sizes than
4K should generally be adopted for a new arch. Ideally that would have
been 16K (which architectures other than x86 also went with), but that
was optional. Optional necessarily means "does not exist". My advice
when Red Hat began internal work on ARMv8 was to listen to the experts.
Linus is not an expert?
Note that I never said he isn't an expert. He's one of the smartest guys
around, but he's not always right 100% of the time. Folks who run
performance numbers were consulted about the merits of 64K (as were a
number of chip architects) and they said that was the way to go. We can
always later decide (once there's a server market running fully) that
this was premature and change to 4K, but it's very hard to go the other
way around later if we settle for 4K on day one. The reason is 4K works
great out of the box as it's got 30 years of history on that other arch,
but for 64K we've only POWER to call on, and its userbase generally
aren't stressing the same workloads as on 64-bit ARM. Sometimes they
are, and that's been helpful with obscure things like emacs crashing due
to a page size assumption or two on arrow presses.
Post by Gordan Bobic
Post by Jon Masters
I am well aware of Linus's views on the topic and I have seen the rants
on G+ and elsewhere. I am completely willing to be wrong (there is not
enough data yet) over moving to 64K too soon and ultimately if it was
premature see things like RHELSA on the Red Hat side switch back to 4K.
My main concern is around how much code elsewhere will rot and need
attention should this ever happen.
I think, once again, that any concern over 4K being a well supported
page size is perhaps made moot by the billions of x86 systems out there
using that size. Most of the time, it's not the case that applications
have assembly code level changes required for 64K. Sure, the toolchain
will emit optimized code and it will use adrp and other stuff in v8 to
reference pages and offsets, but that compiler code works well. It's not
the piece that's got any potential for issue. It's the higher level C
code that possibly has assumptions to iron out on a 64K base vs 4K.
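To make that concrete, a contrived illustration (not taken from any
real package) of the kind of C-level assumption that has to be ironed
out when the granule is 64K rather than 4K:

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

#define ASSUMED_PAGE_SIZE 4096          /* the classic hard-coded assumption */

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);  /* what portable code should ask for */
    printf("assumed %d bytes, kernel granule is actually %ld bytes\n",
           ASSUMED_PAGE_SIZE, page);

    /* mmap offsets and "page aligned" addresses must honour the real
     * granule; code that rounds to 4096 quietly changes behaviour (or
     * gets EINVAL) when the granule is 65536. */
    void *p = mmap(NULL, ASSUMED_PAGE_SIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    printf("asked for %d bytes, the kernel reserved a whole %ld-byte page\n",
           ASSUMED_PAGE_SIZE, page);
    munmap(p, ASSUMED_PAGE_SIZE);
    return 0;
}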
Post by Gordan Bobic
Post by Jon Masters
Fedora is its own master, but I strongly encourage retaining the use of
64K granules at this time, and letting it play out without responding to
one or two corner use cases and changing course. There are very many
design optimizations that can be done when you have a 64K page size,
from the way one can optimize cache lookups and hardware page table
walker caches to the reduction of TLB pressure (though I accept that
huge pages are an answer for this under a 4K granule regime as well). It
would be nice to blaze a trail rather than take the safe default.
While I agree with the sentiment, I think something like this is
better decided on carefully considered merit assessed through
empirical measurement.
Sure. We had to start with something. Folks now have something that they
can use to run numbers on. BUT note that the kind of 64-bit hw that is
needed to really answer these questions is only just coming. Again, if
64K was a wrong choice, we can change it. It's only a mistake if we
always dogmatically stick to principle in the face of evidence to the
contrary. If the evidence says "dude, 64K was at best premature and
Linus was right", then that's totally cool with me. We'll meanwhile have
a codebase that is even more portable (different arch/pagesz).
Post by Gordan Bobic
Post by Jon Masters
My own opinion is that (in the longer term, beginning with server) we
should not have a 32-bit legacy of the kind that x86 has to deal with
forever. We can use virtualization (and later, if it really comes to it,
containers running 32-bit applications with 4K pages exposed to them -
an implementation would be a bit like "Clear" containers today) to run
32-bit applications on 64-bit without having to do nasty hacks (such as
multilib) and reduce any potential for confusion on the part of users
(see also RasPi 3 as an example). It is still early enough in the
evolution of general purpose aarch64 to try this, and have the pragmatic
fallback of retreating to 4K if needed. The same approach of running
under virtualization or within a container model equally applies to
ILP32, which is another 32-bit ABI that some folks like, in that a third
party group is welcome to do all of the lifting required.
This again mashes 32-bit support with page size. If there is no
32-bit support in the CPU, I am reasonably confident that QEMU
emulation of it will be unusably slow for just about any serious
use case (you might as well run QEMU emulation of ARM32 on x86
in that case and not even touch upon aarch64).
Point noted. If we keep the conversation purely to the relative merits
of 64K vs 4K page size upon memory use overhead, fragmentation, and the
like, then the previous comment about getting numbers stands. This is
absolutely something we intend to gather within the perf team inside Red
Hat (and share in some form) as more hardware arrives that can be
realistically used to quantify the value. You're welcome to also run
numbers and show that there's a definite case for 4K over 64K.
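As a trivial first data point (a sketch only, not a claim about the
methodology our perf folks will use), something like the following
reports a process's memory footprint in bytes, so the same workload
can be compared under a 4K and a 64K kernel:

#include <stdio.h>
#include <unistd.h>

/* /proc/self/statm reports VSZ and RSS in pages, so the page size has
 * to be factored in before comparing kernels with different granules. */
int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    unsigned long vsz_pages = 0, rss_pages = 0;
    FILE *f = fopen("/proc/self/statm", "r");
    if (!f) { perror("fopen"); return 1; }
    if (fscanf(f, "%lu %lu", &vsz_pages, &rss_pages) != 2) {
        fclose(f);
        fprintf(stderr, "unexpected statm format\n");
        return 1;
    }
    fclose(f);
    printf("page size %ld, VSZ %lu KiB, RSS %lu KiB\n",
           page, vsz_pages * page / 1024, rss_pages * page / 1024);
    return 0;
}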
Post by Gordan Bobic
Post by Jon Masters
Post by Gordan Bobic
2) Nobody has yet pointed at ARM's own documentation (I did ask
earlier) that says that 4KB memory page support is optional
rather than mandatory.
Nobody said this was a requirement. I believe you raised this as some
kind of logical fallacy to reinforce the position that you have taken.
Apologies if this wasn't you.
Post by Gordan Bobic
I'm afraid you got that backwards. I believe it was Peter who
said that Seattle didn't support 4KB pages, or at least seemingly implied it.
Seattle is only tested (by us) using 64K pages; the hardware supports 4K
pages at an architectural level. I get your argument that this could
well mean that if we later drop to 4K pages there could be platforms
that have issues. I would counter that I know of at least one other
fairly popular distribution that is building with 4K pages,
and is being used on some platforms, so the number of platforms that
won't be able to handle 4K is probably quite limited. The variety of
options out there between distros is a *good* thing for validation.
Post by Gordan Bobic
If Seattle does in fact support the spec-mandated 4KB memory
pages, then that specific SoC is no longer relevant to this
thread.
Then we can move on from that.

Thanks,

Jon.
--
Computer Architect | Sent from my Fedora powered laptop
Richard W.M. Jones
2016-04-28 14:40:43 UTC
Permalink
Post by Chanho Park
I want to use the armhf fedora rootfs on the aarch64 bit kernel.
When I ran the dnf command on the armhf image with aarch64 kernel, the
dnf command was failed with below error.
Leaving aside the question of kernel page size, what's your actual use
case and could you use virtualization for it?

Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines. Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v
Gordan Bobic
2016-04-29 08:34:30 UTC
Permalink
Post by Jon Masters
Hi Gordan,
Post by Gordan Bobic
First of all, Jon, thank you for your thoughts on this matter.
No problem :)
Post by Gordan Bobic
Post by Jon Masters
Allow me to add a few thoughts. I have been working with the ARM
vendors
(as well as the ARM Architecture Group) since before the architecture
was announced, and the issue of page size and 32-bit backward
compatibility came up in the earliest days. I am speaking from a Red
Hat
perspective and NOT dictating what Fedora should or must do, but I do
strongly encourage Fedora not to make a change to something like the
page size simply to support a (relatively) small number of corner
cases.
IMO, the issue of backward compatibility is completely secondary to
the issue of efficiency of memory fragmentation/occupancy when it
comes
to 64KB pages. And that isn't a corner case, it is the overwhelmingly
primary case.
Let's keep to the memory discussion then, I agree. On the fragmentation
argument, I do agree this is an area where server/non-server uses
certainly clash. It might well be that we later decide in Fedora that
4K
is the right size once there are more 64-bit client devices.
As an additional factoid to throw into this, one obvious case where
large pages can be beneficial is databases. But speaking as a
database guy who has measured the impact of using huge pages
on MySQL, I can confirm that the performance improvement from
putting the buffer pool into 1MB huge pages instead of 4KB pages
is in the 3% range. While I haven't measured it, it doesn't seem
unreasonable to extrapolate the following (a sketch of the mechanism
follows the two points below):

1) 4KB -> 64KB pages will make less difference than 4KB -> 1MB
pages in this use case, which is supposed to be the prime example
of where larger memory pages make a measurable difference.

2) Regardless of whether we use 4KB or 64KB standard pages,
we can still use huge pages anyway, further minimizing the
usefulness of the 64KB compromise.
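For anyone who hasn't played with this, a minimal sketch of the
mechanism being discussed (explicit huge pages via mmap; this is the
generic Linux interface, not necessarily what MySQL itself uses, and
it assumes huge pages have been reserved beforehand via
/proc/sys/vm/nr_hugepages):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Hypothetical 256 MiB "buffer pool"; the size must be a multiple of
 * the system huge page size (2MB on x86, 2MB or 512MB on arm64
 * depending on the base granule). */
#define POOL_SIZE (256UL * 1024 * 1024)

int main(void)
{
    void *pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (pool == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");       /* e.g. no huge pages reserved */
        pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (pool == MAP_FAILED) { perror("mmap"); return 1; }
    }
    memset(pool, 0, POOL_SIZE);            /* touch it so it is actually backed */
    printf("buffer pool mapped at %p\n", pool);
    munmap(pool, POOL_SIZE);
    return 0;
}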
Post by Jon Masters
Post by Gordan Bobic
Post by Jon Masters
Having several entirely separate ISAs just for the fairly
nonexistent field of
proprietary non-recompilable third party 32-bit apps doesn't really
make
sense. Sure, running 32-bit via multilib is fun and all, but it's not
really something that is critical to using ARM systems.
Except where there's no choice, such as closed source applications
(Plex comes to mind) or libraries without appropriate ARM64
implementation
such as Mono. I'm sure pure aarch64 will be supported by it all at
some point, but the problem is real today.
It's definitely true that there are some applications that aren't yet
ported to ARMv8, though that list is fairly small (compared with IA32).
Post by Gordan Bobic
But OK, for the sake of this discussion let's completely ignore the
32-bit support to simplify things.
OK :)
Post by Gordan Bobic
Post by Jon Masters
The mandatory page sizes in the v8 architecture are 4K and 64K, with
various options around the number of bits used for address spaces,
huge
pages (or ginormous pages), and contiguous hinting for smaller "huge"
pages. There is an option for 16K pages, but it is not mandatory. In
the
server specifications, we don't compel Operating Systems to use 64K,
but
everything is written with that explicitly in mind. By using 64K
early
we ensure that it is possible to do so in a very clean way, and then
if
(over the coming years) the deployment of sufficient real systems
proves
that this was a premature decision, we still have 4K.
The real question is how much code will bit-rot due to not being
tested with 4KB pages.
With respect, I think it's the other way around. We have another whole
architecture targeting 4K pages by default, and (regretfully perhaps,
though that's a personal opinion) it's a pretty popular choice that
many
people are using in Fedora today. So I don't see any situation in which
4K bitrots over 64K. I did see the opposite being very likely if we
didn't start out with 64K as the baseline going in on day one.
Perhaps. Hopefully this won't be an issue at least as long as Fedora
ships both 32-bit and 64-bit ARM distros.
Post by Jon Masters
Post by Gordan Bobic
Post by Jon Masters
I also asked a few of the chip
vendors not to implement 32-bit execution (and some of them have
indeed
omitted it after we discussed the needs early on), and am
aggressively
pushing for it to go away over time in all server parts. But there's
more to it than that. In the (very) many early conversations with
various performance folks, the feedback was that larger page sizes
than
4K should generally be adopted for a new arch. Ideally that would
have
been 16K (which architectures other than x86 also went with), but
that
was optional. Optional necessarily means "does not exist". My
advice
when Red Hat began internal work on ARMv8 was to listen to the
experts.
Linus is not an expert?
Note that I never said he isn't an expert. He's one of the smartest
guys
around, but he's not always right 100% of the time. Folks who run
performance numbers were consulted about the merits of 64K (as were a
number of chip architects) and they said that was the way to go. We can
always later decide (once there's a server market running fully) that
this was premature and change to 4K, but it's very hard to go the other
way around later if we settle for 4K on day one. The reason is 4K works
great out of the box as it's got 30 years of history on that other
arch,
but for 64K we've only POWER to call on, and its userbase generally
aren't stressing the same workloads as on 64-bit ARM. Sometimes they
are, and that's been helpful with obscure things like emacs crashing
due
to a page size assumption or two on arrow presses.
Indeed, but the POWER hardware also tends to be used in rather niche
cases, and probably more often with large databases than x86 or ARM.
And as I mentioned above, even on workloads like that, the page size
doesn't yield groundbreaking performance improvements. Certainly
nowhere near enough improvement to offset the penalty of, say, the
hypervisor overhead.
Post by Jon Masters
Post by Gordan Bobic
Post by Jon Masters
I am well aware of Linus's views on the topic and I have seen the
rants
on G+ and elsewhere. I am completely willing to be wrong (there is
not
enough data yet) over moving to 64K too soon and ultimately if it was
premature see things like RHELSA on the Red Hat side switch back to
4K.
My main concern is around how much code elsewhere will rot and need
attention should this ever happen.
I think, once again, that any concern over 4K being a well supported
page size is perhaps made moot by the billions of x86 systems out there
using that size. Most of the time, it's not the case that applications
have assembly code level changes required for 64K. Sure, the toolchain
will emit optimized code and it will use adrp and other stuff in v8 to
reference pages and offsets, but that compiler code works well. It's
not
the piece that's got any potential for issue. It's the higher level C
code that possibly has assumptions to iron out on a 64K base vs 4K.
Indeed, the toolchain output is a concern - specifically anything that
would cause aarch64 binaries to run with 64KB kernels but not 4KB ones.
But I concede that at this stage such bugs are purely theoretical. I
have certainly not (yet?) found anything in an aarch64 distro that
breaks when I replace the kernel with one that uses 4KB pages.
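For what it's worth, the main toolchain-level thing I know to check is
the alignment of a binary's load segments - my understanding is that
the aarch64 GNU toolchain defaults to a 64K max-page-size precisely so
the same binary loads under either granule (and swapping granules on
the kernel side is just the CONFIG_ARM64_4K_PAGES /
CONFIG_ARM64_64K_PAGES choice). A rough sketch of such a check
(64-bit ELF only):

#include <elf.h>
#include <stdio.h>
#include <string.h>

/* Print the largest PT_LOAD alignment in a 64-bit ELF file.  A binary
 * whose load segments are aligned to 64K should map cleanly under both
 * a 4K and a 64K kernel; smaller alignment is the case worth flagging. */
int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr eh;
    if (fread(&eh, sizeof eh, 1, f) != 1 ||
        memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0 ||
        eh.e_ident[EI_CLASS] != ELFCLASS64) {
        fprintf(stderr, "not a 64-bit ELF file\n");
        fclose(f);
        return 1;
    }

    unsigned long max_align = 0;
    for (int i = 0; i < eh.e_phnum; i++) {
        Elf64_Phdr ph;
        if (fseek(f, (long)(eh.e_phoff + (Elf64_Off)i * eh.e_phentsize), SEEK_SET) != 0 ||
            fread(&ph, sizeof ph, 1, f) != 1)
            break;
        if (ph.p_type == PT_LOAD && ph.p_align > max_align)
            max_align = ph.p_align;
    }
    fclose(f);

    printf("largest PT_LOAD alignment: %lu bytes (%s)\n", max_align,
           max_align >= 65536 ? "fine for 4K and 64K granules"
                              : "may assume a smaller granule");
    return 0;
}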
Post by Jon Masters
Post by Gordan Bobic
Post by Jon Masters
Fedora is its own master, but I strongly encourage retaining the use
of
64K granules at this time, and letting it play out without responding
to
one or two corner use cases and changing course. There are very many
design optimizations that can be done when you have a 64K page size,
from the way one can optimize cache lookups and hardware page table
walker caches to the reduction of TLB pressure (though I accept that
huge pages are an answer for this under a 4K granule regime as well).
It
would be nice to blaze a trail rather than take the safe default.
While I agree with the sentiment, I think something like this is
better decided on carefully considered merit assessed through
empirical measurement.
Sure. We had to start with something. Folks now have something that
they
can use to run numbers on. BUT note that the kind of 64-bit hw that is
needed to really answer these questions is only just coming. Again, if
64K was a wrong choice, we can change it. It's only a mistake if we
always dogmatically stick to principle in the face of evidence to the
contrary. If the evidence says "dude, 64K was at best premature and
Linus was right", then that's totally cool with me. We'll meanwhile
have
a codebase that is even more portable (different arch/pagesz).
Fair enough. I guess the next step would be to actually run some
numbers.
Post by Jon Masters
Post by Gordan Bobic
Post by Jon Masters
My own opinion is that (in the longer term, beginning with server) we
should not have a 32-bit legacy of the kind that x86 has to deal with
forever. We can use virtualization (and later, if it really comes to
it,
containers running 32-bit applications with 4K pages exposed to them
-
an implementation would be a bit like "Clear" containers today) to
run
32-bit applications on 64-bit without having to do nasty hacks (such
as
multilib) and reduce any potential for confusion on the part of users
(see also RasPi 3 as an example). It is still early enough in the
evolution of general purpose aarch64 to try this, and have the
pragmatic
fallback of retreating to 4K if needed. The same approach of running
under virtualization or within a container model equally applies to
ILP32, which is another 32-bit ABI that some folks like, in that a
third
party group is welcome to do all of the lifting required.
This again mashes 32-bit support with page size. If there is no
32-bit support in the CPU, I am reasonably confident that QEMU
emulation of it will be unusably slow for just about any serious
use case (you might as well run QEMU emulation of ARM32 on x86
in that case and not even touch upon aarch64).
Point noted. If we keep the conversation purely to the relative merits
of 64K vs 4K page size upon memory use overhead, fragmentation, and the
like, then the previous comment about getting numbers stands. This is
absolutely something we intend to gather within the perf team inside
Red
Hat (and share in some form) as more hardware arrives that can be
realistically used to quantify the value. You're welcome to also run
numbers and show that there's a definite case for 4K over 64K.
Indeed I intend to, but in most cases getting real-world data to run
such numbers on is non-trivial. Any real data large enough to produce
meaningful results tends to belong to clients, who by and large
run on x86 only. So right now the best I can offer is the experience
that on database workloads huge pages outperform 4KB pages by very
low single-digit percentages.

It is therefore questionable how much difference using 64KB non-huge
pages might actually make in terms of performance, while increases
in memory fragmentation are reasonably well understood.

It strikes me that this is something better tested in a lab
rather than by guinea-pigging the entire user base, most of
whom aren't fortunate enough to have machines with enough
RAM not to care.

Gordan
