DDR4 Palladium Memory Model Guide
Trademarks: Trademarks and service marks of Cadence Design Systems, Inc. contained in this
document are attributed to Cadence with the appropriate symbol. For queries regarding
Cadence’s trademarks, contact the corporate legal department at the address shown above or
call 800.862.4522. All other trademarks are the property of their respective holders.
Restricted Permission: This publication is protected by copyright law and international treaties
and contains trade secrets and proprietary information owned by Cadence. Unauthorized
reproduction or distribution of this publication, or any portion of it, may result in civil and criminal
penalties. Except as specified in this permission statement, this publication may not be copied,
reproduced, modified, published, uploaded, posted, transmitted, or distributed in any way,
without prior written permission from Cadence. Unless otherwise agreed to by Cadence in
writing, this statement grants Cadence customers permission to print one (1) hard copy of this
publication subject to the following conditions:
1. The publication may be used only in accordance with a written agreement between
Cadence and its customer.
2. The publication may not be modified in any way.
3. Any authorized copy of the publication or portion thereof must include all original
copyright, trademark, and other proprietary notices and this permission statement.
4. The information contained in this document cannot be used in the development of
like products or software, whether for internal or external use, and shall not be used
for the benefit of any other party, whether or not for consideration.
Disclaimer: Information in this publication is subject to change without notice and does not
represent a commitment on the part of Cadence. Except as may be explicitly set forth in such
agreement, Cadence does not make, and expressly disclaims, any representations or
warranties as to the completeness, accuracy or usefulness of the information contained in this
document. Cadence does not warrant that use of such information will not infringe any third
party rights, nor does Cadence assume any liability for damages or costs of any kind that may
result from use of such information.
General Information
The Cadence Memory Model Portfolio provides memory device models for the Cadence Palladium
XP, Palladium XP II and Palladium Z1 series systems. Optimizing the acceleration and/or emulation
flow on these platforms for MMP memory models may require information outside the scope of
the MMP user guides and related MMP documentation.
For basic information regarding emulation and acceleration, please refer to the following
documents:
The model is available in several generic configurations derived from the JEDEC
specification. As real devices become available from vendors, they will be added to the
catalog.
Sizes from 2Gb up to 32Gb are available; please consult the memory model catalog for
the current list.
DIMM models are derived from their base part by expanding the data width to 64 or 72 bits
rather than instantiating multiple base parts as in real devices. This minimizes the number
of memory ports in order to improve emulation performance. Current DIMM models
require only one clock input. Please use CK0 if more than one CK is available.
The different levels give an overall indication of the amount of testing, level of quality and
feature availability in the model. For details on supported features check the User Guide
for that particular model family.
There are three release levels for models in the MMP release.
Access to Initial and Emerging Release versions of the models will require a Beta
Agreement to be signed before the model can be delivered.
3. Configurations
3.1. DDR4 SDRAM Addressing
The following table lists the possible configurations. Not all configurations are available
from all vendors. Please consult the appropriate vendor site for details on the parts they
offer.

Data Width                    X4        X8        X16
Bank Groups                   4         4         2
Banks within a group          4         4         4
2Gb    Row Address            A[14:0]   A[13:0]   A[13:0]
       Column Address         A[9:0]    A[9:0]    A[9:0]
4Gb    Row Address            A[15:0]   A[14:0]   A[14:0]
       Column Address         A[9:0]    A[9:0]    A[9:0]
8Gb    Row Address            A[16:0]   A[15:0]   A[15:0]
       Column Address         A[9:0]    A[9:0]    A[9:0]
16Gb   Row Address            A[17:0]   A[16:0]   A[16:0]
       Column Address         A[9:0]    A[9:0]    A[9:0]
NOTE: Per the specification, a portion of the SDRAM address bus (a slice of the
row/column address width) is multiplexed with non-address signals. For example, within
A[17:0] the following functions are shared:
A[10] serves as the auto-precharge bit.
A[12] serves as BC_n.
A[14] serves as WE_n.
A[15] serves as CAS_n.
A[16] serves as RAS_n.
The A[13:0] inputs are used for ADDR. This allocation is reflected in the wrappers and the
core model.
Please see the Input/Output Functional Description in the specification for additional detail.
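As an illustrative sketch of this multiplexing (based on the JEDEC command truth table, with CS_n assumed low; the function and signal names here are illustrative, not model internals), the following shows how A[16:14] act as RAS_n/CAS_n/WE_n whenever ACT_n is high:

```python
def decode_command(act_n, a16, a15, a14):
    """Decode a DDR4 command from ACT_n and the multiplexed A[16:14]
    pins, assuming the device is selected (CS_n low)."""
    if act_n == 0:
        # ACT_n low: ACTIVATE command; A[16:14] carry row bits R[16:14]
        return "ACT"
    # ACT_n high: A16 acts as RAS_n, A15 as CAS_n, A14 as WE_n
    table = {
        (0, 0, 0): "MRS",
        (0, 0, 1): "REF",
        (0, 1, 0): "PRE",
        (1, 0, 0): "WRITE",
        (1, 0, 1): "READ",
        (1, 1, 0): "ZQC",
        (1, 1, 1): "NOP",
    }
    return table.get((a16, a15, a14), "RFU")
```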
Data Width                    X8X2
Bank Groups                   4
Banks within a group          4
8Gb    Row Address            A[14:0]
       Column Address         A[9:0]
16Gb   Row Address            A[15:0]
       Column Address         A[9:0]
32Gb   Row Address            A[16:0]
       Column Address         A[9:0]
The following table provides information about exposed localparams that are NOT user
adjustable. On rare occasions a user may find that one of these localparams needs
adjusting for their configuration. If this case arises, please contact Cadence emulation or
MMP support.
As described in the sections “IXCOM Compilation” and “Large Memory Support,” each
MMP DDR4 or DDR4 DIMM model that exceeds 30 bits of address width needs to
incorporate the multiple core memory array generator (mmp_gen_mem.vp) into the
memory build. In these cases, the file mmp_gen_mem.vp automatically defines the
Verilog macro MMP_LG_MEM_BITS with a default value of 30. This default may be
changed within mmp_gen_mem.vp or overridden on the command line with the define
option. Users of DDR4 and DDR4 DIMM memories smaller than 30 bits of address
width should NOT need to be aware of, or modify, this file or its Verilog macro value.
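The resulting array count can be sketched as follows (a back-of-the-envelope helper for illustration, not part of the release; it matches the Large Memory Support examples later in this guide, where a 32-bit address space is split into four arrays):

```python
def num_core_arrays(total_addr_bits, mmp_lg_mem_bits=30):
    """Each generated core array covers 2**mmp_lg_mem_bits addresses,
    so a model whose address width exceeds that limit is split across
    2**(total_addr_bits - mmp_lg_mem_bits) arrays."""
    if total_addr_bits <= mmp_lg_mem_bits:
        return 1
    return 2 ** (total_addr_bits - mmp_lg_mem_bits)
```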
The Verilog macro DQ_MAPPING is defined by default in the ddr4_wide.vp file for all
DDR4 DIMMs. The user should not need to be aware of or modify this Verilog macro.
7. Address Mapping
The array of the DDR4 model is mapped into the internal memory of the Palladium
system as a single two-dimensional array. The mapping of bank group, bank, row, and
column addresses to the internal model array is as follows:
This information is required if the memory needs to be preloaded with user data.
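A flattening of the address fields can be sketched as below. NOTE: the field order used here ({bank group, bank, row, column}, most to least significant) and the default widths are an illustrative assumption only; the model's actual mapping is given by the figure in this section.

```python
def linear_address(bg, ba, row, col,
                   ba_bits=2, row_bits=16, col_bits=10):
    """Flatten {bank group, bank, row, column} into one array index.
    Field order and widths are assumptions for illustration."""
    addr = bg
    addr = (addr << ba_bits) | ba      # append bank bits
    addr = (addr << row_bits) | row    # append row bits
    addr = (addr << col_bits) | col    # append column bits
    return addr
```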
8. Register Definitions
The DDR4 specification defines seven Mode Registers, and the Palladium DDR4 model
implements all seven. However, not all features are supported in the model.
MR0 bit fields:
BG1: RFU
BG0, BA1, BA0: MR Select
A17: RFU
A13, A11:9: WR and RTP
A12, A6:4, A2: CAS Latency
A8: DLL Reset
A7: Test Mode
A3: Burst Type
A1:0: Burst Length
Bit 7 (Mode):
0: Normal (supported)
1: Test (not supported)
A12 A6 A5 A4 A2   CAS Latency
0   0  1  1  1    16
0   1  0  0  0    18
0   1  0  0  1    20
0   1  0  1  0    22
0   1  0  1  1    24
0   1  1  0  0    23
0   1  1  0  1    17
0   1  1  1  0    19
0   1  1  1  1    21
1   0  0  0  0    25 (only 3DS available)
1   0  0  0  1    26
1   0  0  1  0    27 (only 3DS available)
1   0  0  1  1    28
1   0  1  0  0    reserved for 29
1   0  1  0  1    30
1   0  1  1  0    reserved for 31
1   0  1  1  1    32
all remaining encodings   reserved
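The latency table above can be captured as a simple lookup. The five encoding bits are assumed here to be MR0 {A12, A6, A5, A4, A2} per the JEDEC MR0 definition; the latency values come directly from the table:

```python
# CAS latency values keyed by the 5-bit encoding (assumed MR0 bits
# {A12, A6, A5, A4, A2}); missing keys are reserved encodings.
CL_ENCODING = {
    0b00111: 16, 0b01000: 18, 0b01001: 20, 0b01010: 22, 0b01011: 24,
    0b01100: 23, 0b01101: 17, 0b01110: 19, 0b01111: 21,
    0b10000: 25,  # 3DS only
    0b10001: 26,
    0b10010: 27,  # 3DS only
    0b10011: 28, 0b10101: 30, 0b10111: 32,
}

def cas_latency(encoding):
    """Return the CAS latency for a 5-bit encoding, or None if reserved."""
    return CL_ENCODING.get(encoding)
```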
TDQS Enable, RTT_NOM, ODIC (Output Driver Impedance Control), and DLL Enable are
not applicable to the DDR4 Palladium Model.
TRR (Target Row Refresh), RTT_WR and ASR (Array Self Refresh) are not applicable to
the DDR4 Palladium Model.
Mode register 3 is implemented in the DDR4 model. The model does not support bits
MR3[10:3].
Bit 4: Internal Vref Monitor
Bit 3: Temperature Controlled Refresh Mode
Bit 2: Temperature Controlled Refresh Range
Bit 1: Maximum Power Down Mode
Bit 0: RFU
Mode register 4 is implemented in the DDR4 model. The model does not support PPR, Self
Refresh Abort, Internal Vref Monitor, Temperature Controlled Refresh Mode and Range, and
Maximum Power Down Mode.
RTT_PARK and ODT Input Buffer for Power Down are not applicable to the DDR4
Palladium Model.
1: Enabled
Write operation: Either Data Mask or Write DBI can be enabled, but both cannot be
enabled at the same time.
9. Features
The following table shows a list of features and feature support for the DDR4 model:
1. Assert RESET
2. De-assert RESET
3. Start clocks
4. Wait for CKE to be asserted
5. Write to all seven Mode Registers
6. Issue a ZQC command
Generally there is no ordering required within step 5, but all mode registers need to be
written. The model requires that these steps are performed in the correct sequence in
order to complete initialization, and will not respond to any other commands until the
sequence is completed. If the initialization sequence needs to be bypassed, the init_done
signal may be forced high, and the mode registers can be set by forcing signals MR0,
MR1, MR2, MR3, MR4, MR5, MR6.
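For illustration only (this checker is not part of the model, and the step names are labels rather than model signal names), the ordering requirement can be expressed as a subsequence test over an event trace:

```python
# The six required initialization steps, in order.
REQUIRED_STEPS = ["RESET_ASSERT", "RESET_DEASSERT", "CLOCKS_STARTED",
                  "CKE_ASSERTED", "ALL_MRS_WRITTEN", "ZQC"]

def init_sequence_ok(events):
    """True if the required steps appear in order within the event
    trace; other events may be interleaved between them."""
    it = iter(events)
    # "step in it" consumes the iterator, so this checks subsequence order
    return all(step in it for step in REQUIRED_STEPS)
```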
11. Limitations
Currently the DDR4 model does not support the following features, as well as others listed
as unsupported in the Features section of this user guide:
For writes to a DDR memory, industry datasheets show each DQS edge centered within
the corresponding valid period (v0, v1, v2, etc.) of DQ, as in the following diagram.
[Waveform: DQ valid windows V0-V3 with each DQS edge centered in the corresponding window]
For DDR models provided by Cadence for Palladium, if the design drives DQ and DQS
signals with the above timing, the DDR memory will behave correctly. However, to obtain
this timing in Palladium, the fastest design clock must toggle twice as frequently as the
DQS signal. If this faster clock is not needed for any other reason, the presence of the
faster clock will usually cause an unnecessary 2X slowdown in emulation speed. To
eliminate the need for a faster clock, you can have the design generate each DQS edge at
the end of the corresponding DQ valid period (rather than the middle), as in the following
diagram:
[Waveform: DQ valid windows V0-V3 with each DQS edge at the end of the corresponding window]
Note that the first DQS edge is at the *end* of first valid DQ, not at the beginning.
For reads from the DDR model, the DDR model will drive DQ and DQS with the first DQS
edge at the *beginning* of the first valid data, not at the end:
[Waveform: DQ valid windows V0-V3 with the first DQS edge at the beginning of the first window]
The DDR model behaves this way to conform with industry datasheets for DDR memories.
The design reading the data from the DDR model must delay the DQS signal, and use the
delayed-DQS signal to sample the DQ. A delay of one Q_FDP0B should work fine, even in
CAKE 1X mode. If you are using CAKE 1X mode and the DDR clock is the fastest design
clock, the DQ signal will change twice per FCLK, and the Q_FDP0B delaying DQS will
provide one-half FCLK delay, so that each delayed-DQS edge is at the end of the
corresponding data valid period.
To delay the DQS signal, a commonly used approach is to create a special pad cell for
DQS, that has a Q_FDP0B delay cell inserted on the path that leads from the DDR
memory into the design.
The user may insert delays into pad cells (or elsewhere in the design) using the code
example below. In the IXCOM flow it leverages ixc_pulse, an internal primitive that can be
used to access FCLK and create a controlled delay; in the Classic ICE flow it leverages
the Q_FDP0B primitive for delay generation. For more detailed information about
ixc_pulse, please see the UXE User Guide section called Generating Pulses. There is no
need for the user to define the Verilog macro IXCOM_UXE; it is predefined in the IXCOM
flow. Note that in UXE 13.1.0 and prior, the equivalent pulse-generating function was
named axis_pulse.
module dqs_delay (in, out_delay);
  input  in;
  output out_delay;
`ifdef IXCOM_UXE
  reg  out_delay;
  wire Fclk;               // FCLK-rate pulse from ixc_pulse
  wire VCC = 1'b1;
  ixc_pulse #(1) fclk_gen (Fclk, VCC);
  always @(posedge Fclk)
    out_delay <= in;
`else
  // Classic ICE flow: a single Q_FDP0B provides the delay
  Q_FDP0B fclk_dly (.D(in), .Q(out_delay));
`endif
endmodule
13. DIMMs
13.1. Configurations
Model Name Memory Data Registered/ Bank Bank Row Column Ranks
Size Width Unbuffered Group Address Address Address
jedec_ddr4_2GB_72_rdimm 2GB x72 RDIMM BG[1:0] BA[1:0] A[13:0] A[9:0] 1
jedec_ddr4_2GB_72_udimm 2GB x72 UDIMM BG[1:0] BA[1:0] A[13:0] A[9:0] 1
jedec_ddr4_2GB_64_rdimm 2GB x64 RDIMM BG[1:0] BA[1:0] A[13:0] A[9:0] 1
jedec_ddr4_2GB_64_udimm 2GB x64 UDIMM BG[1:0] BA[1:0] A[13:0] A[9:0] 1
jedec_ddr4_4GB_72_rdimm 4GB x72 RDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 1
jedec_ddr4_4GB_72_udimm 4GB x72 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 1
jedec_ddr4_4GB_64_rdimm 4GB x64 RDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 1
jedec_ddr4_4GB_64_udimm 4GB x64 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 1
jedec_ddr4_8GB_72_rdimm 8GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 1
jedec_ddr4_8GB_72_udimm 8GB x72 UDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 1
jedec_ddr4_8GB_64_rdimm 8GB x64 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 1
jedec_ddr4_8GB_64_udimm 8GB x64 UDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 1
jedec_ddr4_16GB_72_rdimm 16GB x72 RDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 1
jedec_ddr4_16GB_72_udimm 16GB x72 UDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 1
jedec_ddr4_16GB_64_rdimm 16GB x64 RDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 1
jedec_ddr4_16GB_64_udimm 16GB x64 UDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 1
jedec_ddr4_16GB_2r_72_rdimm 16GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
jedec_ddr4_16GB_2r_72_udimm 16GB x72 UDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
jedec_ddr4_16GB_2r_64_rdimm 16GB x64 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
jedec_ddr4_16GB_2r_64_udimm 16GB x64 UDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
jedec_ddr4_32GB_2r_72_rdimm 32GB x72 RDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 2
jedec_ddr4_32GB_2r_72_udimm 32GB x72 UDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 2
jedec_ddr4_32GB_2r_64_rdimm 32GB x64 RDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 2
jedec_ddr4_32GB_2r_64_udimm 32GB x64 UDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 2
mta4atf25664az 2GB x64 UDIMM BG[0] BA[1:0] A[14:0] A[9:0] 1
mta4atf51264az 4GB x64 UDIMM BG[0] BA[1:0] A[15:0] A[9:0] 1
mta8atf51264az 4GB x64 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 1
mta9asf51272az 4GB x72 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 1
mta9asf51272pz 4GB x72 RDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 1
mta8atf1g64az 8GB x64 UDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 1
mta16atf1g64az 8GB x64 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 2
mta18adf1g72pz 8GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 1
mta18asf1g72pdz 8GB x72 RDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 2
mta18asf1g72pz 8GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 1
mta16atf2g64az 16GB x64 UDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
mta18asf2g72az 16GB x72 UDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
mta18adf2g72pz 16GB x72 RDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 1
mta18asf2g72pz 16GB x72 RDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 1
mta18asf2g72pdz 16GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
mta36asf2g72pz 16GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
mta36ads4g72pz 32GB x72 RDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 2
mta36asf4g72pz 32GB x72 RDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 2
m393a5143db0 4GB x72 RDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 1
m393a1g40db0 8GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 1
m393a1g40db1 8GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 1
m393a1g43db0 8GB x72 RDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 2
m393a1g43db1 8GB x72 RDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 2
m393a1g40eb1 8GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 1
m393a1g43eb1 8GB x72 RDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 2
m393a1k43bb0 8GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 1
m393a2g40db0 16GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
m393a2g40db1 16GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
m393a2g40eb1 16GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
m393a2k40bb0 16GB x72 RDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 1
m393a2k40bb1 16GB x72 RDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 1
m393a2k43bb1 16GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
m393a4k40bb0 32GB x72 RDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 2
m393a4k40bb1 32GB x72 RDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 2
m393a8g40d40 64GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2 x 4H
m393a8k40d21 64GB x72 RDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 2 x 2H
m393aak40d41 128GB x72 RDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 2 x 4H
m392a2g40dm0 16GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
m392a2k43bb0 16GB x72 RDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
m392a4k40bm0 32GB x72 RDIMM BG[1:0] BA[1:0] A[16:0] A[9:0] 2
m474a1g43db0 8GB x72 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 2
m474a1g43db1 8GB x72 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 2
m474a1g43eb1 8GB x72 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 2
m474a2k43bb1 16GB x72 UDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
m471a5644eb0 2GB x64 UDIMM BG[1:0] BA[1:0] A[13:0] A[9:0] 1
m471a5143db0 4GB x64 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 1
m471a5143eb0 4GB x64 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 1
m471a5143eb1 4GB x64 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 1
m471a1g43db0 8GB x64 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 2
m471a1g43eb1 8GB x64 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 2
m471a1k43bb0 8GB x64 UDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 1
m471a1k43bb1 8GB x64 UDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 1
m471a2k43bb1 16GB x64 UDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
m391a5143eb1 4GB x72 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 1
m391a1g43db0 8GB x72 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 2
m391a1g43db1 8GB x72 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 2
m391a1g43eb1 8GB x72 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 2
m391a2k43bb1 16GB x72 UDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
M378a5644eb0 2GB x64 UDIMM BG[1:0] BA[1:0] A[13:0] A[9:0] 1
M378a5143db0 4GB x64 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 1
M378a5143eb1 4GB x64 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 1
M378a1g43db0 8GB x64 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 2
M378a1g43eb1 8GB x64 UDIMM BG[1:0] BA[1:0] A[14:0] A[9:0] 2
M378a1k43bb1 8GB x64 UDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 1
M378a2k43bb1 16GB x64 UDIMM BG[1:0] BA[1:0] A[15:0] A[9:0] 2
There are a few parameters that can be set to adjust the configuration of a model. One
limitation is that the sum of row_addr_width, bank_grp_width, bank_addr_width, and
col_addr_width must be less than 31 because of the 1G address limit in IXCOM. Please
see the Large Memory Support section if the sum exceeds 30.
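That constraint can be sanity-checked as follows (the helper name is illustrative):

```python
def within_ixcom_limit(row_addr_width, bank_grp_width,
                       bank_addr_width, col_addr_width):
    """True if the width sum stays at or below 30 bits, i.e. within
    the 1G (2**30) IXCOM address limit noted above."""
    total = (row_addr_width + bank_grp_width
             + bank_addr_width + col_addr_width)
    return total <= 30
```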
Here is a list of parameters:
DQ Map: SDRAM bit for each connector bit within the nibble
Index    Connector bit           Index    Connector bit
(Hex)     0  1  2  3             (Hex)     0  1  2  3
0x01 0 1 2 3 0x21 4 5 6 7
0x02 0 1 3 2 0x22 4 5 7 6
0x03 0 2 1 3 0x23 4 6 5 7
0x04 0 2 3 1 0x24 4 6 7 5
0x05 0 3 1 2 0x25 4 7 5 6
0x06 0 3 2 1 0x26 4 7 6 5
0x07 1 0 2 3 0x27 5 4 6 7
0x08 1 0 3 2 0x28 5 4 7 6
0x09 1 2 0 3 0x29 5 6 4 7
0x0A 1 2 3 0 0x2A 5 6 7 4
0x0B 1 3 0 2 0x2B 5 7 4 6
0x0C 1 3 2 0 0x2C 5 7 6 4
0x0D 2 0 1 3 0x2D 6 4 5 7
0x0E 2 0 3 1 0x2E 6 4 7 5
0x0F 2 1 0 3 0x2F 6 5 4 7
0x10 2 1 3 0 0x30 6 5 7 4
0x11 2 3 0 1 0x31 6 7 4 5
0x12 2 3 1 0 0x32 6 7 5 4
0x13 3 0 1 2 0x33 7 4 5 6
0x14 3 0 2 1 0x34 7 4 6 5
0x15 3 1 0 2 0x35 7 5 4 6
0x16 3 1 2 0 0x36 7 5 6 4
0x17 3 2 0 1 0x37 7 6 4 5
0x18 3 2 1 0 0x38 7 6 5 4
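The index encoding in the table follows a regular pattern: 0x01-0x18 enumerate the 24 permutations of SDRAM bits 0-3 in lexicographic order, and 0x21-0x38 do the same for bits 4-7. A sketch of a decoder (the rank-mapping bits [7:6], described later in this section, are masked off; the function name is illustrative):

```python
from itertools import permutations

def dq_map(index):
    """Decode a DQ Map index into the SDRAM bit driven by each of the
    four connector bits of the nibble."""
    idx = index & 0x3F  # drop the rank-mapping bits [7:6]
    if 0x01 <= idx <= 0x18:
        base, rank = (0, 1, 2, 3), idx - 0x01
    elif 0x21 <= idx <= 0x38:
        base, rank = (4, 5, 6, 7), idx - 0x21
    else:
        raise ValueError("DQ Map index out of range")
    # itertools.permutations yields permutations in lexicographic order
    return list(list(permutations(base))[rank])
```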
The index values are provided in SPD bytes 60 to 77 (3Ch to 4Dh). To enable this
mapping in the Palladium DIMM model, a data file needs to be preloaded into an array in
the SPD EEPROM.
@3c
2b
03
15
27
30
18
17
25
09
34
10
24
27
05
31
0a
12
2e
These values, along with other parameters, may also be preloaded into the SPD EEPROM
memory at the following path:
Rank mapping is defined by bits 7 and 6 of the DQ Map Index value as described in
JEDEC’s DDR4 SPD Contents Master Specification JC-45-2220.01
(ddr4_spd_spec_10_22_2012.pdf). Rank mapping defines the connectivity between bits
in different package ranks (even and odd ranks).
Bits [7:6]   Even rank    Odd rank
00           DQ0          DQ1
             DQ1          DQ0
             DQ2          DQ3
             DQ3          DQ2
             DQ4          DQ5
             DQ5          DQ4
             DQ6          DQ7
             DQ7          DQ6
             DQ8:DQ31     TBD
01           Reserved
10           Reserved
11           Reserved
The DIMM models are compiled as single rank by default, which does not have address
mirroring enabled. If an odd rank or a multiple-rank model is needed, please request it
from customer support. Alternatively, the user may change the following line in the .vp
file and recompile for an odd rank.
The memory models are currently provided in one format: encrypted RTL files (*.vp)
that target use in either the IXCOM flow or the ICE flow. The encrypted RTL (*.vp)
files must be synthesized along with the other design code prior to acceleration/emulation.
Shown below are some simplified commands for compiling the base DDR4 models in the
IXCOM flow and the ICE flow.
NOTE: The DDR4 family has several variations and configurations. Consult the
discussion below in this section and the sections titled “Base Model Filelist” and “DIMM
Model Filelist” to know which files are necessary for building the selected DDR4 or
DDR4 DIMM part. Be sure to include all the needed files when compiling the model into a
design. Alternatively, the user may include all the files listed in the filelist and let the
synthesis tool discard the unneeded modules.
vavlog ../src/<model_name>.vp
It is also common for Palladium flows to require -keepRtlSymbol. This option enables the
HDL Compiler to keep original VHDL RTL symbols, such as “.”, whenever possible. In
other words, it maps the VHDL RTL signal name a.b to the netlist entry \a.b; without this
modifier, the signal name would be converted to a_b in the netlist. If the recommended
compile script includes such options, the user must include them, since omitting them
may affect the functionality of the design.
For large memory models that exceed 30 bits of address width, include the files
mmp_gen_mem.vp and mmp_submem.vp in the build as shown below. These files will
incorporate the multiple core memory array generator into the build and the Verilog macro
MMP_LG_MEM_BITS will automatically be defined. See the section on Large Memory
Support for more details.
Shown below are some simple commands for compiling the standard DDR4 DIMM
models in the IXCOM flow. Note that a few additional input files are needed.
For large memory DIMM models that exceed 30 bits of address width, include the files
mmp_gen_mem.vp, mmp_submem.vp, and ddr4_wide_lg_mem.vp in the vlan step of the
build as shown below. These files will incorporate the multiple core memory array
generator into the build and the Verilog macro MMP_LG_MEM_BITS will automatically be
defined. See the section on Large Memory Support for more details.
For 3DS RDIMM models that have 2 ranks per stack, please add ddr4_rdimm_2rank.vp
and ddr4_rdimm.vp to the compilation step as follows:
The hw.i file used in the example runtime command shown above may contain
commands such as the following:
debug .
host .
xc xt0 zt0 run
run
exit
The above examples are intended to show the difference between compiling DDR4
models and DDR4 DIMM models. Please see the UXE or VXE user guide for more details
on the IXCOM flow.
*Please see Large Memory Support section for more details. Files mmp_gen_mem.vp
and mmp_submem.vp are located in the sdram/common sub-directory under the MMP
installation.
For additional details about the SWI product at large, please consult the SWI product
documentation, which includes a user guide. This documentation can be accessed via
[Link] on the Product Pages/Product Manuals page, where SWI 13.1 is listed with other
Functional Verification products.
A user of the SWI solution who is integrating this core model with the corresponding SWI
Smart Memory component should define the Verilog macro MMP_SM to enable the Smart
Memory interface. This enables the portion of the interface that resides in the MMP model
core, thus completing access to the implemented SWI Smart Memory functionality.
The SWI Smart Memory interface includes the signals shown in Table 2: SWI Smart Memory
Interface Signals. It is outside the MMP scope to treat the integration of the MMP model into a
hybrid solution. For additional details, please consult the SWI documentation and other Hybrid
Solution documentation.
The Smart Memory interface includes a user-adjustable parameter that passes user data
from wrapper layers external to this MMP model into the MMP model. The single 64-bit
parameter is subdivided by field to accommodate data as shown in Table 3: SWI Smart
Memory User Adjustable Parameters below. This parameter defaults to a value of ‘0’ and
is managed by the SWI product Smart Memory component. For additional details, please
consult the SWI documentation and other Hybrid Solution documentation.
PARAMETER DESCRIPTION
parameter [63:0] SMConfigSpecific_UserData Smart Memory User Data Passing
FIELD DESCRIPTION
[2:0] Used for specifying device/channel/subpart
[63:61] Extension value of total_addr_bits
The Smart Memory interface includes non-user adjustable localparams and parameters as
shown in Table 4: SWI Smart Memory Non-User Adjustable Parameters below. Note that a
localparam of the same name (total_addr_bits) but of different size is part of the standard MMP
core model.
The SWI Smart Memory interface has dependencies on the inclusion of external file(s). For
additional details about purpose and content of any Smart Memory related external files, please
consult the SWI documentation and other Hybrid Solution documentation.
`ifdef MMP_SM
`include "cdn_sm_mapDRBCToLinAdr.vh"
`endif
Error Correction Code (ECC) allows single-bit errors to be corrected and other bit errors to
be detected, improving high-frequency operation reliability and data accuracy.
While the MMP DDRx models do not include any ECC functionality or handling, the user
can arrange to “support” ECC on the DDR model first by using a model with a 72-bit
datapath and, additionally, by artificially injecting errors using some external mechanism,
if needed. In other words, MMP models can often be found that interface with the memory
controller data path, thus satisfying connectivity requirements. This arrangement might
support a user who needs to test ECC-related error conditions in emulation. The following
paragraphs provide some details for consideration.
For DDRx models, ECC is performed by the controller. When writing, the controller
calculates an ECC value and stores it in the upper 8 bits of a 72-bit DIMM. When reading,
it re-calculates the ECC from the read data and checks the result against the ECC stored
in the upper 8 bits. If the re-calculated value matches the stored value, there is no error.
So, to support ECC in an MMP model during acceleration or emulation, the user first
needs a 72-bit data path to provide the added ports for handling the upper 8 ECC bits
above and beyond the standard 64-bit data path. In other words, where a normal DDR
memory has a 64-bit data bus, the ECC memory has a 72-bit data bus and 9 bits for DQS
and DM. For most scenarios, ones where the user will not enable the ECC function in their
controller and will not inject errors, this arrangement is compatible with MMP models. The
expanded ports and data path constitute all of the support that MMP models can provide
toward ECC handling.
There are two potential paths for achieving the 72-bit data bus. The first is to select and
use a 72-bit DIMM; please review the DIMM pages of the MMP catalog for an appropriate
part. If an appropriate 72-bit part cannot be found, the user may contact Cadence support
to request a 72-bit data path version of the model of interest. A second approach to
consider for some models is to expand the data path of a smaller-width data bus to 72
bits. The safest way to do this is to modify the data width parameter to 72 in the standard
64-bit model: the user can set the parameter data_bits to 72 when instantiating the model.
Not all models, however, have a data_bits parameter available for configuration.
Next, the user considers the error injection aspect. There is no capability in MMP models for
error injection. MMP models are provided as system level emulation models and not as
verification IP. For specifically testing error conditions, the user needs to work around the gap.
One alternative that the user might consider is to corrupt memory data in order to mimic data
corruption. In this scenario, the user can execute the xeDebug memory command to write some
incorrect data to the array. For example, the following command will corrupt one location in an
array:
tb_top.rtl_module.ddr4_wide_i0.memcore
In scenarios where the array address space is greater than 30 bits, there are multiple
arrays which are mapped using distinct hierarchical naming. The hierarchical paths for the
array names associated with each core memory array of the large memory are reported in
the output of the “memory -list” command, or can be viewed in the
dbFiles/xcva_top_et5mpart file. For example, with a 32-bit address the user will see 4
arrays named as follows:
Multiple data files for preloading each separate memory array are also required. In the example
above, there will be four data files needed to preload the entire large memory.
Likewise, multiple memory –load commands are needed to preload the large memory of this
example. An xeDebug preload example for the case above will look as follows:
In another example, with 2 ranks and a 31-bit address, the user will see 4 arrays named as follows:
tb_jedec_ddr4_32GB_2r_64_udimm.mmp0.ddr4_udimm_64_i0.ddr4_wide_i0.mem1.\multiple.array_0_.u1 .memcore
addresses 0 .. 1G - 1
tb_jedec_ddr4_32GB_2r_64_udimm.mmp0.ddr4_udimm_64_i0.ddr4_wide_i0.mem1.\multiple.array_1_.u1 .memcore
addresses 1G .. 2G - 1
tb_jedec_ddr4_32GB_2r_64_udimm.mmp0.ddr4_udimm_64_i1.ddr4_wide_i0.mem1.\multiple.array_0_.u1 .memcore
addresses 0 .. 1G - 1
tb_jedec_ddr4_32GB_2r_64_udimm.mmp0.ddr4_udimm_64_i1.ddr4_wide_i0.mem1.\multiple.array_1_.u1 .memcore
addresses 1G .. 2G - 1
An xeDebug preload example for the case above will look as follows:
For DIMMs, the following table shows the same information per rank (bank group bits, bank bits, row address bits, column address bits, total address bits, and core memory arrays per rank).
jedec_ddr4_2GB_72_rdimm 2 2 14 10 28 1
jedec_ddr4_2GB_72_udimm 2 2 14 10 28 1
jedec_ddr4_2GB_64_rdimm 2 2 14 10 28 1
jedec_ddr4_2GB_64_udimm 2 2 14 10 28 1
jedec_ddr4_4GB_72_rdimm 2 2 15 10 29 1
jedec_ddr4_4GB_72_udimm 2 2 15 10 29 1
jedec_ddr4_4GB_64_rdimm 2 2 15 10 29 1
jedec_ddr4_4GB_64_udimm 2 2 15 10 29 1
jedec_ddr4_8GB_72_rdimm 2 2 16 10 30 1
jedec_ddr4_8GB_72_udimm 2 2 16 10 30 1
jedec_ddr4_8GB_64_rdimm 2 2 16 10 30 1
jedec_ddr4_8GB_64_udimm 2 2 16 10 30 1
jedec_ddr4_16GB_72_rdimm 2 2 17 10 31 2
jedec_ddr4_16GB_72_udimm 2 2 17 10 31 2
jedec_ddr4_16GB_64_rdimm 2 2 17 10 31 2
jedec_ddr4_16GB_64_udimm 2 2 17 10 31 2
jedec_ddr4_16GB_2r_72_rdimm 2 2 16 10 30 1
jedec_ddr4_16GB_2r_72_udimm 2 2 16 10 30 1
jedec_ddr4_16GB_2r_64_rdimm 2 2 16 10 30 1
jedec_ddr4_16GB_2r_64_udimm 2 2 16 10 30 1
jedec_ddr4_32GB_2r_72_rdimm 2 2 17 10 31 2
jedec_ddr4_32GB_2r_72_udimm 2 2 17 10 31 2
jedec_ddr4_32GB_2r_64_rdimm 2 2 17 10 31 2
jedec_ddr4_32GB_2r_64_udimm 2 2 17 10 31 2
mta4atf25664az 1 2 15 10 28 1
mta4atf51264az 1 2 16 10 29 1
mta8atf51264az 2 2 15 10 29 1
mta9asf51272az 2 2 15 10 29 1
mta9asf51272pz 2 2 15 10 29 1
mta8atf1g64az 2 2 16 10 30 1
mta16atf1g64az 2 2 15 10 29 1
mta18adf1g72pz 2 2 16 10 30 1
mta18asf1g72pdz 2 2 15 10 29 1
mta18asf1g72pz 2 2 16 10 30 1
mta16atf2g64az 2 2 16 10 30 1
mta18asf2g72az 2 2 16 10 30 1
mta18adf2g72pz 2 2 17 10 31 2
mta18asf2g72pz 2 2 17 10 31 2
mta18asf2g72pdz 2 2 16 10 30 1
mta36asf2g72pz 2 2 16 10 30 1
mta36ads4g72pz 2 2 17 10 31 2
mta36asf4g72pz 2 2 17 10 31 2
m393a5143db0 2 2 15 10 29 1
m393a1g40db0 2 2 16 10 30 1
m393a1g40db1 2 2 16 10 30 1
m393a1g43db0 2 2 15 10 29 1
m393a1g43db1 2 2 15 10 29 1
m393a1g40eb1 2 2 16 10 30 1
m393a1g43eb1 2 2 15 10 29 1
m393a1k43bb0 2 2 16 10 30 1
m393a2g40db0 2 2 16 10 30 1
m393a2g40db1 2 2 16 10 30 1
m393a2g40eb1 2 2 16 10 30 1
m393a2k40bb0 2 2 17 10 31 2
m393a2k40bb1 2 2 17 10 31 2
m393a2k43bb1 2 2 16 10 30 1
m393a4k40bb0 2 2 17 10 31 2
m393a4k40bb1 2 2 17 10 31 2
m393a8g40d40 2 2 16 10 30 1
m393a8k40d21 2 2 17 10 31 2
m393aak40d41 2 2 17 10 31 2
m392a2g40dm0 2 2 16 10 30 1
m392a2k43bb0 2 2 16 10 30 1
m392a4k40bm0 2 2 17 10 31 2
m474a1g43db0 2 2 15 10 29 1
m474a1g43db1 2 2 15 10 29 1
m474a1g43eb1 2 2 15 10 29 1
m474a2k43bb1 2 2 16 10 30 1
m471a5644eb0 2 2 14 10 28 1
m471a5143db0 2 2 15 10 29 1
m471a5143eb0 2 2 15 10 29 1
m471a5143eb1 2 2 15 10 29 1
m471a1g43db0 2 2 15 10 29 1
m471a1g43eb1 2 2 15 10 29 1
m471a1k43bb0 2 2 16 10 30 1
m471a1k43bb1 2 2 16 10 30 1
m471a2k43bb1 2 2 16 10 30 1
m391a5143eb1 2 2 15 10 29 1
m391a1g43db0 2 2 15 10 29 1
m391a1g43db1 2 2 15 10 29 1
m391a1g43eb1 2 2 15 10 29 1
m391a2k43bb1 2 2 16 10 30 1
M378a5644eb0 2 2 14 10 28 1
M378a5143db0 2 2 15 10 29 1
M378a5143eb1 2 2 15 10 29 1
M378a1g43db0 2 2 15 10 29 1
M378a1g43eb1 2 2 15 10 29 1
M378a1k43bb1 2 2 16 10 30 1
M378a2k43bb1 2 2 16 10 30 1
DIMMs that need multiple arrays per rank use the memory array generator. To locate the
hierarchical naming to use in preloading and other memory references, the user can examine
the output from the “memory –list” command or review the dbFiles/xcva_top_et5mpart file.
The user also needs to compile with files mmp_gen_mem.vp, mmp_submem.vp, and
ddr4_wide_lg_mem.vp instead of ddr4_wide.vp. Please see the section named “IXCOM
Compilation” for an example. Below is an example for the vlan step using mmp_gen_mem.vp,
mmp_submem.vp, and ddr4_wide_lg_mem.vp:
Note: Files mmp_gen_mem.vp and mmp_submem.vp are located in the sdram/common sub-
directory under the MMP installation.
The default value of MMP_LG_MEM_BITS may also be overridden with the define
option. The user should note that more core memory arrays will be generated when the
MMP_LG_MEM_BITS setting is reduced, which may then require additional data files for
preloading the memory.
18. Debugging
This model has several debugging options, techniques, and tips that may assist the user
in isolating a problem.
• For issues that may not be DDR4 specific please review the Memory Model Portfolio
FAQ for All Models User Guide.
• Debug signals: The following signals can be monitored to help detect incorrect
command sequence:
o err_init_cmd: Commands other than MRS, ZQC, NOP issued during init
o err_ba_mrs: BG[1] is not '0' during MRS
o err_mrs: MRS issued without all the banks precharged
o err_ref: REF issued without all the banks precharged
o err_act: ACT issued to a bank not precharged
o err_zqc: ZQC issued without all the banks precharged
o err_wr/err_rd: WRITE/READ issued to a bank that is not active
o bad_rd/bad_wr_0: READ/WRITE does not satisfy tCCD = 4 cycles
o bad_wr_1: Read to write violation
• Golden waveform: A package with a reference waveform is available which shows the
following command sequence(s):
• Debug Display: This MMP memory model has available a built-in debug methodology
called MMP Debug Display that is based on the Verilog system task $display. Please
see the Palladium Memory Model Debug Display User Guide in the release docs
directory for additional information.
This MMP model supports manual configuration by accompanying the model mode
register or configuration register declarations with synthesis directives, such as keep_net
directives, that instruct the compiler to keep the relevant nets available for runtime
forcing. For a general description of this support, please see the user guide in the MMP
release at docs/MMP_FAQ_for_All_Models.pdf.
The following table lists the internal register path and naming, along with the specification
or datasheet naming, for model mode registers or configuration registers that are
accompanied by keep_net synthesis directives in support of such manual configuration.
ONLY writeable configuration registers or fields are supported in this way. Please read the
relevant datasheet for details about individual register behavior and mapping to fields.
The following table shows the revision history for this document.