Packet classification algorithms enable fast packet processing in network systems and are particularly important in the processing equipment deployed in Internet backbones. Ternary content addressable memories (TCAMs) are used to perform parallel search in hardware implementations of these algorithms. Although TCAMs provide high-speed search, one of the main obstacles to their use is their high power consumption. This study presents a new technique for reducing memory consumption in the TCAM blocks of a hardware classifier. In this classifier architecture, a decision tree is first built and the classification rules are distributed among its leaf nodes. Since each leaf of the tree corresponds to one TCAM block, in the second stage the rules are stored in the TCAM blocks according to the tree structure. The architecture also uses a supplementary TCAM block, the general block, which holds the rules that are duplicated or shared among the leaves of the decision tree; consequently, this block is searched for every packet that is classified. Previous architectures suffer from memory waste and a considerable increase in power consumption caused by the unbalanced distribution of rules across the main TCAM blocks and an unexpected growth in the number of repeated rules in the general block. This study proposes a new algorithm that optimizes the distribution of rules over the TCAM blocks in the first stage of packet classification. The key idea is to select the bits used for cutting the geometric rule space so that the rules are distributed evenly and the repetition of shared rules in the main blocks is reduced. The efficiency of the proposed architecture, which uses these intelligent cuts, is compared with recent architectures using synthetic rule sets and packet traces generated by the ClassBench tool. The results show that the proposed method distributes rules across the TCAM blocks more evenly than competing architectures while at the same time reducing power consumption.
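The abstract describes the cutting and distribution mechanism only at a high level. The Python sketch below illustrates the general idea of choosing cut bits and distributing ternary rules into leaf TCAM blocks plus a shared general block; it is an illustrative assumption, not the paper's actual algorithm. The rule format, the greedy balance-plus-wildcard score, and the max_copies threshold are invented for the example.

```python
from itertools import product

# Ternary classification rules: each character is '0', '1', or '*' (don't care).
# In a real classifier these strings would be the concatenated header fields.
RULES = [
    "0101****",
    "0100****",
    "01*11***",
    "1*011***",
    "10*0****",
    "110*****",
    "111*0***",
    "********",   # default rule matches every packet
]

def bit_score(rules, bit):
    """Heuristic cost of cutting on `bit`: prefer bits that split the rules
    evenly (small |n0 - n1|) and are rarely wildcarded (small n_star),
    since a '*' forces replication or demotion to the general block."""
    n0 = sum(r[bit] == "0" for r in rules)
    n1 = sum(r[bit] == "1" for r in rules)
    n_star = len(rules) - n0 - n1
    return abs(n0 - n1) + 2 * n_star          # weight on wildcards is arbitrary

def choose_cut_bits(rules, k):
    """Greedily pick the k lowest-cost bit positions to cut on."""
    width = len(rules[0])
    return sorted(range(width), key=lambda b: bit_score(rules, b))[:k]

def partition(rules, cut_bits, max_copies=2):
    """Assign each rule to the leaf TCAM blocks (one per combination of the
    cut-bit values) it can match; rules that would be replicated into more
    than `max_copies` leaves go to the shared general block instead."""
    leaves = {bits: [] for bits in product("01", repeat=len(cut_bits))}
    general = []
    for rule in rules:
        matching = [bits for bits in leaves
                    if all(rule[b] in ("*", v) for b, v in zip(cut_bits, bits))]
        if len(matching) > max_copies:
            general.append(rule)               # too widely shared: store once
        else:
            for bits in matching:
                leaves[bits].append(rule)      # replicate into each leaf block
    return leaves, general

if __name__ == "__main__":
    cut_bits = choose_cut_bits(RULES, k=2)
    leaves, general = partition(RULES, cut_bits)
    print("cut bits:", cut_bits)
    for bits, block in leaves.items():
        print("leaf", "".join(bits), "->", block)
    print("general block:", general)
```

Under this kind of scheme, a lookup searches only the leaf block selected by the packet's cut-bit values plus the general block, so balanced leaves and a small general block directly translate into fewer TCAM entries activated per search and hence lower power.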